Saturday, December 7, 2013

Setting up a wireless bridge with DD-WRT

I recently needed to set up an old router as a wireless bridge. The device is running DD-WRT firmware, which lets users configure it as a bridge pretty easily. But both sets of directions I found on the DD-WRT wiki seemed outdated and more complex than they need to be. These are the steps I took instead:

  1. Disconnect your computer from any network device it might be connected to - either wired or wireless. Plug your computer into one of your (soon-to-be) bridge's wired LAN ports.
  2. In a web browser, navigate to the bridge's configuration site - http://192.168.1.1/ by default. Go to the Setup → Basic Setup page. Change the Local IP Address to an address other than the local IP address of the router that your bridge will be connecting to. Also be sure that the IP address you pick for your bridge is not in the range of IP addresses that your router's DHCP server might assign to clients. Click the Apply Settings button. If your bridge's local IP address changed, you will need to navigate to that new IP address before continuing.
  3. On the Wireless → Basic Settings page, change Wireless Mode to Client Bridge (Routed). Apply the settings.
  4. On the same page, change the Network Name to the SSID of the network you want your bridge to connect to. Apply the settings.
  5. On the Wireless → Wireless Security page, change the Security Mode to the mode used by your router and enter the router's wifi password. Apply the settings.
  6. On the Administration → Management page, press the Reboot Router button.

Once the bridge reboots, you should be able to connect to the internet with only a wired connection to your bridge. No configuration should be needed on the router you are connecting to, unless your router is configured to use additional security features like MAC address filtering.

One additional step that I've seen online but wasn't needed on my bridge was to go to the bridge's Setup → Basic Setup page and set the Gateway and Local DNS to the router's local IP address. DD-WRT seems to do that automatically when you use the default - 0.0.0.0. That may vary, though, depending on what chipset your bridge uses. My bridge is a Buffalo WZR-HP-300HN, which uses an Atheros chip.

Thursday, August 15, 2013

Working With Git Branches

At a previous job, we switched from using CVS to using Git and I was lucky enough to work with a bunch of guys who were big on Git best practices. My biggest take-away was branching. Branching is great because it gives you a “sandbox” to develop in. It gives you the freedom to commit your work early and often, essentially creating backups as you develop. You can revert to a previous commit if you ever need to. And you can merge in other people’s changes without the risk of losing your work if things don’t go smoothly (it’s bound to happen with multiple developers, no matter which source control system you use.) Think of committing to a branch like saving your code as you type, only at a much higher level.

So how do you branch? Well, let’s start by updating the branch that you will be branching off of. When you clone a repository, you are creating a local copy of the server’s branches, along with those branches’ complete histories. Most of the work you do with Git is done offline, without connecting to the server that you cloned the repo from. That means that your local repo will not be aware of new commits (i.e., commits made by other developers) until you connect to the server and ask for them. That is done with Git’s fetch command.

git fetch

Now your local repo knows about everything that happened remotely since your repo was last updated. But fetching doesn’t automatically update your local branches (e.g., master) to look like they do remotely (e.g., origin/master.) After all, you wouldn’t want a source control tool to meddle with your project unless you told it to. So how do you get your local master branch updated to the latest changes you just fetched? Assuming you have no uncommitted changes, get a fresh start by doing a “hard reset” on your local master branch.

git reset --hard origin/master

Now that your master branch is the same as origin/master, create the branch that you will be doing your work on.

git branch new-branch-name
git checkout new-branch-name

You can also do that in one step using

git checkout -b new-branch-name
. A common way to use branching is to create a branch for every feature you work on, which would likely mean that you would have one branch for every ticket that gets assigned to you in a system like JIRA or VersionOne. Doing so allows you to work on multiple features independently, without intertwining their code.
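The update-then-branch flow above can be exercised end to end in a throwaway repo. This is only a sketch: the temp-dir “origin”, the user settings and the TICKET-123 branch name are all stand-ins for a real remote and ticket number.

```shell
set -e
dir=$(mktemp -d)
git init -q --bare "$dir/origin.git"

# A local clone standing in for your working copy.
git clone -q "$dir/origin.git" "$dir/work"
cd "$dir/work"
git config user.email dev@example.com
git config user.name Dev
git symbolic-ref HEAD refs/heads/master   # pin the branch name to master
git commit -q --allow-empty -m "initial commit"
git push -q origin master

git fetch -q                        # learn what happened remotely
git reset -q --hard origin/master   # sync local master with origin/master
git checkout -q -b TICKET-123       # one branch per feature/ticket
```

The symbolic-ref line just pins the unborn branch's name to master so the sketch behaves the same regardless of your init.defaultBranch setting.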

Now that you are on your newly created feature branch, work on your feature and commit your changes any time you want to “checkpoint” your progress.

Once you’re ready to check in your work, you’ll want to make sure that your changes will play well with any changes people have committed to origin/master in the meantime.

  1. Make sure that all of your changes have been committed to your branch. If not, commit them or forever hold your peace.
  2. git fetch
    Just like before, this retrieves all of the changes that have been made remotely but doesn’t affect your local branches or workspace.
  3. git merge origin/master
    merges those changes into the current branch. If Git is able to merge without any conflicts it can’t resolve on its own, it will commit the result of the merge. If there are conflicts it can’t resolve, your branch will be left in a “merging” state until you resolve the conflicts manually, “add” those conflicting files and commit. Before moving on, make sure that the changes you brought in from origin/master don’t cause any problems that keep the project from compiling or running as expected. That may mean updating your project’s dependencies or even updating your code to address API changes that may have happened elsewhere in the project. If you end up making changes, be sure to commit those new changes before continuing.
  4. Now that your branch is up to date, it’s time to merge your changes back into master and push them to origin. In a sense, we really just use our local master branch as a staging area for pushing changes to origin. Switch to your master branch using
    git checkout master
    . Since we haven’t been developing on the master branch (i.e., it doesn’t have changes that haven’t been pushed to origin) and haven’t been bothering to keep it up to date, rather than merging in the latest changes from origin, reset it using
    git reset --hard origin/master
    .
  5. Merge your feature changes into your freshly reset master branch using
    git merge --squash feature-branch-name
    . Since your feature branch was just updated with the latest changes from origin/master (the same commit that you just reset your local master branch to,) Git shouldn't have any problem performing the merge/squash and you should see that it was performed as a “fast forward” squash.
    On a side note, notice that we are using the squash flag. Without it, every commit that was made on the feature branch would be “brought over” to master and would show up in its history. That’s bad for two reasons. First, from a best practice standpoint, the master branch’s history should read like a list of features that were committed atomically, not cobbled together with a dozen ten line commits. To put it another way, master is not your sandbox – branches are. And second, from a more practical standpoint for people using code review tools like Gerrit, without squashing, every one of those commits pushed to origin will need to be code reviewed separately.
  6. When you perform a merge using the squash flag, it does not commit the changes for you. That’s good because it gives you an opportunity to write a nice little commit message, summarizing all the hard work you did on your feature branch (with a Gerrit change-id if needed) before pushing your changes to origin. Commit your changes with
    git commit
    .
  7. Now your local master branch has all of your feature branch changes in a single commit that’s ready to be pushed to master. Do so with
    git push
    . If the remote Git server rejects your push, look at the error message. It’s possible that someone beat you to the punch. That sucks but it’s simple to fix. Checkout your feature branch again (
    git checkout feature-branch-name
    ) and repeat this process starting with step 2.
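The whole check-in sequence above can likewise be run end to end against a throwaway local “origin”. Everything here is illustrative - the repo paths, file names, user settings and the feature-branch name are stand-ins:

```shell
set -e
dir=$(mktemp -d)
git init -q --bare "$dir/origin.git"
git -C "$dir/origin.git" symbolic-ref HEAD refs/heads/master

# Our working clone, with an initial commit pushed to origin.
git clone -q "$dir/origin.git" "$dir/work"
cd "$dir/work"
git config user.email dev@example.com
git config user.name Dev
git symbolic-ref HEAD refs/heads/master
git commit -q --allow-empty -m "initial commit"
git push -q origin master

# Branch off and commit early and often (checkpoints).
git checkout -q -b feature-branch
echo "work" > feature.txt
git add feature.txt
git commit -q -m "checkpoint 1"
echo "more work" >> feature.txt
git commit -q -am "checkpoint 2"

# Meanwhile, a teammate lands a change on origin/master.
git clone -q "$dir/origin.git" "$dir/teammate"
(cd "$dir/teammate" &&
 git config user.email t@example.com && git config user.name T &&
 echo "hi" > other.txt && git add other.txt &&
 git commit -q -m "teammate change" && git push -q origin master)

# Steps 2-3: fetch, then merge origin/master into the feature branch.
git fetch -q
git merge -q -m "merge origin/master into feature-branch" origin/master

# Step 4: master is just a staging area - reset it to origin/master.
git checkout -q master
git reset -q --hard origin/master

# Steps 5-6: squash-merge the feature and write one tidy commit message.
git merge -q --squash feature-branch
git commit -q -m "Add the feature (squashed from feature-branch)"

# Step 7: push the single squashed commit.
git push -q origin master
```

Afterwards, origin/master contains three commits - the initial commit, the teammate's change and your single squashed feature commit - rather than every checkpoint from the feature branch.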

Sunday, January 6, 2013

JPA alternatives to Hibernate UserTypes


A couple years ago, I started a side project that was a great way to get more experience with Hibernate and some of its advanced features. I want to put my webapp out there for the world to use but since I don't expect it to make much money (if any,) I decided that Google App Engine (GAE) would be a great place to host it. I can host my app for free and if the site catches on, I can pay the fees and re-examine hosting options. However, to access their data storage from Java, you need to use either JDO or JPA. Since JPA is the newer of the two and was supposedly modelled after Hibernate, it was the obvious choice. I started porting my app to JPA using Hibernate as my implementation and I plan on migrating the whole app to GAE (and its JPA implementation, DataNucleus) at a later date.

My experience converting from Hibernate to JPA was fairly smooth but the biggest shortcoming was JPA's lack of a UserType equivalent. UserTypes allow developers to persist values that are of a type the framework doesn't handle natively. I was using them to handle enums and Joda Time objects. For example, suppose I have a table where I store user information. One piece of information I might want to store is a user's gender. In Java, I would represent that with an enumeration like this:

public enum Gender {
  MALE('M'),
  FEMALE('F');

  private Character code;
  private static final Map<Character,Gender> valuesByCode;
 
  static {
    valuesByCode = new HashMap<Character,Gender>();
    for(Gender gender : values()) {
      valuesByCode.put(gender.code, gender);
    }
  }

  private Gender(Character code) {
    this.code = code;
  }

  public static Gender lookupByCode(Character code) { 
    return valuesByCode.get(code); 
  }

  public Character getCode() {
    return code;
  }
}

The 'code' field stores the value I want to use in the database. Since I'm not relying on an enum's 'name' field (which is automatically determined by the value's name in code - MALE and FEMALE in this case,) my Java and SQL naming conventions don't need to match up and I don't have to worry about breaking my app with basic refactoring. In Hibernate, I would have written a UserType to handle getting an enum's 'code' and looking up an enum by its code. But in JPA, I had to find an alternative.

One approach I tried was to leverage JPA's @Pre/PostUpdate, @Pre/PostPersist and @PostLoad annotations to populate, update and clear a JPA-friendly field that I added.

private Gender gender;
private Character genderCode;

@PrePersist
@PreUpdate
public void populatePersistenceFields() {
  genderCode = gender==null? null : gender.getCode();
}

@PostPersist
@PostUpdate
public void cleanupPersistenceFields() {
  genderCode = null;
}

@PostLoad
public void updateFromPersistenceFields() {
  gender = Gender.lookupByCode(genderCode);
  cleanupPersistenceFields();
}

When I first tested my code, I was happy to find that an entity could in fact be persisted with the value that had been populated by my methods. However, what I quickly realized is that my JPA implementation (Hibernate) wasn't recognizing changes made to existing entities. So that approach wouldn't work.

So what I settled on instead was to create an additional getter and setter for fields that need to be converted. So far, using Hibernate, this approach has worked well.

@Entity @Table(name="person")
@Access(AccessType.FIELD)
public class Person {
  @Id @GeneratedValue  // entity identifier (required for a valid @Entity)
  private Long id;

  @Transient
  private Gender gender;

  @Transient
  public Gender getGender() {
    return gender;
  }
  public void setGender(Gender gender) {
    this.gender = gender;
  }

  @Column(name="gender", nullable=false) @Access(AccessType.PROPERTY)
  protected Character getGenderCode() {
    return gender==null? null : gender.getCode();
  }
  protected void setGenderCode(Character genderCode) {
    gender = Gender.lookupByCode(genderCode);
  }
}

This example mixes FIELD and PROPERTY access, which isn't necessarily recommended, but I prefer field access and I'm considering this a special case. Hibernate allows it so unless DataNucleus doesn't, I'm going to stick with it to prevent having annotations scattered through my model classes.

It's not elegant by any means but it seems like an acceptable solution to a major shortcoming in JPA. I'm curious what other "pure JPA" (i.e., not using implementation-specific features like Hibernate's @Type annotation) solutions people have found.

Monday, September 19, 2011

Wicket Form Submission in IE

No matter how much developers hate it, Internet Explorer is here to stay. And one of its fun little idiosyncrasies is how it handles form submission when a user presses [enter] with a text box selected. Most browsers submit the form as if its first submit button had been pressed, with that button's name and value in the request. However, IE does not. This problem is universal in web development but the result in Wicket is that the Form's onSubmit() method gets called but none of the Buttons' onSubmit() methods do. If a Button's onSubmit() method is the one that deals with the data that was submitted, the user isn't going to get the result they expect and it's entirely possible that the information they entered may be lost.

In looking for a way to solve this, I ran across Wicket's setDefaultButton() method. However, that really just changes the rendered HTML in a best-effort attempt to override which button is used by browsers that do submit one by default. But since IE doesn't do that to begin with, setDefaultButton() won't have any effect.

Thankfully, IE's form submission problem can be addressed reliably in Java. The approach I took was to override a Form's delegateSubmit() method. Unless you have default form processing turned off, delegateSubmit() is called after validation passes and the models have been updated, but before any of the onSubmit methods are called. The IFormSubmittingComponent (typically a Button of some sort) that submitted the form is passed in as a parameter and this is the perfect opportunity to set one if there isn't one!

public class DefaultingForm extends Form {
    // Constructors go here...

    protected void delegateSubmit(IFormSubmittingComponent submittingComponent) {
        if(submittingComponent==null) {
            submittingComponent = getDefaultButton();
        }
        super.delegateSubmit(submittingComponent);
    }
}

You could set submittingComponent to any value you want but be warned - since that only happens when no submit is sent by the browser, any conditional logic you put in the if-block may only be seen by IE users. That kind of browser-specific behavior could be considered just as bad as the behavior we are trying to remedy! Instead, using the value of getDefaultButton() helps ensure that all browsers will get the same result. Browsers that choose a default button will benefit from the markup changes setDefaultButton() causes. And IE will benefit by having the same button chosen for it. I'm not sure why an approach like this wouldn't be used by Wicket itself. Thoughts?

Thursday, September 8, 2011

Configuring an OpenCV 2 project in Eclipse

I recently picked up a book on OpenCV 2 but its setup instructions are mostly geared toward developing in either Visual Studio or Qt. Since I mostly do Java development, I would prefer to use Eclipse, and I happen to use a Mac at home. I haven't done much C++ development in Eclipse, so I wasn't sure how to get things set up. Setup is actually pretty easy once you know the right values. I've seen some incorrect (maybe just outdated?) info about this on the internet so hopefully this updated info will be useful for people like myself who are new to OpenCV, as well as the C++/Eclipse/Mac combo.

Once you have CDT (Eclipse's C/C++ Development Tooling) installed, download the OpenCV source from their site, opencv.willowgarage.com. OpenCV uses a tool called CMake to generate make scripts for the build tool of your choice. Since I'm not very familiar with Xcode, I followed the Linux instructions, where you generate files for make. Using the default build configuration in CMake has worked well for me so far. In a Terminal, navigate to the directory where the Makefile was generated and run "make". Then run "sudo make install" to copy the headers and dylibs where they belong, in /usr/local/include/ and /usr/local/lib/ respectively.

Once those are built and installed, you can create a C++ project in Eclipse. Then, the trick to getting an OpenCV project to compile and run is to go to the Project Properties and select Settings, under C/C++ Build. On the Tool Settings tab, under MacOS X C++ Linker, select Libraries. Add /usr/local/lib as a library search path. Then add the following as libraries:

  • opencv_calib3d
  • opencv_contrib
  • opencv_core
  • opencv_features2d
  • opencv_flann
  • opencv_gpu
  • opencv_highgui
  • opencv_imgproc
  • opencv_legacy
  • opencv_ml
  • opencv_objdetect
  • opencv_ts
  • opencv_video

Those values correspond to files in the library search path but with the "lib" prefix and ".dylib" extension stripped off. So if your build relies on libfoo.dylib, you would simply add "foo" as a library.
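That stripping rule is purely mechanical. As a quick illustration (the filenames here are just examples, including the fictional libfoo.dylib):

```shell
# Derive the library name Eclipse (or an -l linker flag) wants from a
# dylib filename: drop the "lib" prefix and the ".dylib" extension.
for f in libopencv_core.dylib libopencv_highgui.dylib libfoo.dylib; do
  name=${f#lib}         # strip the "lib" prefix
  name=${name%.dylib}   # strip the ".dylib" extension
  echo "-l$name"
done
```

This prints -lopencv_core, -lopencv_highgui and -lfoo, one per line - the same names you would enter in the Eclipse Libraries list (without the -l).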

You may also notice that the OpenCV build process creates quite a few files in the library directory. Most of those are aliases/links that allow you to be as specific (or not) as you want about which version to use. A library like opencv_core should point to the most recent version. But the most recent version tomorrow might have a totally different API than it did yesterday, so watch out. The opencv_core.2.3 library points to the most recent 2.3.x release. And for me, as of right now, opencv_core.2.3.1 is that most recent version.

Sunday, March 13, 2011

Configuring Jetty 7 programmatically using Wicket and Spring

UPDATE: I don't know why I couldn't get this running before (maybe the example in the documentation was updated?) but I got a Wicket servlet filter application working in an embedded Jetty 7 server that was configured using the existing web.xml. This is all you need prior to calling start().

Server server = new Server();
server.setStopAtShutdown(true);
server.setGracefulShutdown(ALLOWED_SHUTDOWN_TIME);

SocketConnector connector = new SocketConnector();
connector.setPort(8080);
server.addConnector(connector);

WebAppContext context = new WebAppContext();
context.setDescriptor("config/web.xml");
context.setResourceBase("build");
context.setContextPath("/");
context.setParentLoaderPriority(true);
server.setHandler(context);

Anyway, here's the rest of my original post:


I recently spent some time beating my head against a rock, trying to get my Wicket- and Spring-based web application running in an embedded Jetty 7 server. Running an embedded server through the debugger is a convenient way to debug a web application. And if you configure it to run the application from the project's build directory, it also has the added benefit of picking up changes on the fly.

Anyway, although I haven't seen any announcements about it, it seems that Jetty was adopted by (or is at least now in cahoots with) Eclipse, as of version 7. The problem is that the API has changed a fair amount and updated examples are hard to come by. I've even had trouble with some of the examples on the Jetty site.

However, with a bit of tinkering, I came up with the code below. It adheres to the recommended practice of using the Wicket servlet filter instead of the Wicket servlet. And in this case, I am using the SpringWebApplicationFactory to configure my application through Spring.

public class JettyServer {
    private static final int DEFAULT_MAXIMUM_IDLE_TIME = 1000*60*60;
    private static final int ALLOWED_SHUTDOWN_TIME = 1000*5;

    public static void main(String[] args) {
        Server server = new Server();
        server.setStopAtShutdown(true);
        server.setGracefulShutdown(ALLOWED_SHUTDOWN_TIME);

        SocketConnector connector = new SocketConnector();
        connector.setPort(8080);
        // Settings from the Wicket quickstart archetype...
        connector.setMaxIdleTime(DEFAULT_MAXIMUM_IDLE_TIME);
        connector.setSoLingerTime(-1);
        server.addConnector(connector);

        ServletContextHandler servletContextHandler = new ServletContextHandler(ServletContextHandler.SESSIONS);
        servletContextHandler.addServlet(DefaultServlet.class, "/");
        servletContextHandler.setAttribute("contextConfigLocation", "classpath:spring/spring-config.xml");
        server.setHandler(servletContextHandler);

        FilterHolder filterHolder = new FilterHolder(new WicketFilter());
        filterHolder.setInitParameter("applicationFactoryClassName", "org.apache.wicket.spring.SpringWebApplicationFactory");
        servletContextHandler.addFilter(filterHolder, "/*", FilterMapping.REQUEST);

        servletContextHandler.getInitParams().put("contextConfigLocation", "classpath:spring/spring-config.xml");
        servletContextHandler.addEventListener(new ContextLoaderListener());

        try {
            server.start();
            System.out.println("Started Jetty.  Press [return] to shutdown.");
            System.in.read();
            System.out.println("Stopping Jetty...");
            server.stop();
            server.join();
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(100);
        }
    }
}

Tuesday, February 1, 2011

Using Sass in an Ant build

I recently started using Sass in a side project I've been working on.  It allows me to do two neat things:

  1. Declare my site's colors as variables and re-use them everywhere.
  2. Use only meaningful classes in my markup and style those classes appropriately without copying and pasting a lot of CSS.  For example, if I want an invoice to be displayed in a groupbox, I can put the invoice data inside of a div and assign "invoice" as its class.  Then I can use the Sass mixin feature to say that the invoice class should be styled like a groupbox, whose style I define only once.

However, Sass is a Ruby tool and I'm working in Java.  The project uses Ant for its builds and I wanted a way of compiling my Sass code into CSS in an automated way.  What I ended up doing was using Ant's <apply> task to find all the .sass and .scss files in my project and run them through the Sass processor.  This is what the task ended up looking like:

<target name="sass-compile" depends="properties">
    <apply executable="sass" dest="${project.src.dir}" verbose="true" force="true" failonerror="true">
        <srcfile />
        <targetfile />

        <fileset dir="${project.src.dir}" includes="**/*.scss,**/*.sass" excludes="**/_*" />
        <firstmatchmapper>
            <globmapper from="*.sass" to="*.css" />
            <globmapper from="*.scss" to="*.css" />
        </firstmatchmapper>
    </apply>
</target>

Everything is pretty straightforward with one possible exception.  By convention, partials (shared files that are only intended to be imported by other Sass files) begin with an underscore.  I excluded them from the fileset since they aren't intended to be used directly and my build automatically rolls CSS files into the .war file.

But there's one more import-related caveat.  When used with a mapper, the <apply> task only runs the executable on input files that have been modified more recently than the corresponding destination file.  That's a problem because a Sass file might not have changed but one of its imports could have, in which case it should be re-compiled.  To make sure the build happens every time, I used force="true".