We present a method for biologically inspired object recognition with one-shot learning of object appearance. We use a computationally efficient model of V1 keypoints to select object parts with the highest information content and model their surroundings using simple colour features. This map-like representation is fed into a dynamical neural network which performs pose, scale and translation estimation of the object given a set of previously observed object views. We demonstrate the feasibility of our algorithm for cognitive robotic scenarios and evaluate classification performance on a dataset of household items.