Weakly Supervised PatchNets: Describing and Aggregating Local Patches for Scene Recognition

Zhe Wang, Limin Wang, Yali Wang, Bowen Zhang and Yu Qiao



Conventional feature encoding schemes (e.g., Fisher vectors) built on local descriptors (e.g., SIFT) and recent deep convolutional neural networks (CNNs) are two successful classes of image recognition methods. In this paper, we propose a hybrid representation that leverages the strong discriminative capacity of CNNs and the simplicity of descriptor encoding schemes, with a focus on scene recognition. To this end, we make three main contributions. First, we propose a patch-level, end-to-end architecture to model the appearance of local patches, called PatchNet. PatchNet is essentially a customized network trained in a weakly supervised manner, which uses image-level supervision to guide patch-level feature extraction. Second, we present a hybrid visual representation, called VSAD, which uses the robust feature representations of PatchNet to describe local patches and exploits the semantic probabilities of PatchNet to aggregate these local patches into a global representation. Third, based on our VSAD representation, we propose a state-of-the-art scene recognition approach, which achieves excellent performance on two standard benchmarks: MIT Indoor67 (86.2%) and SUN397 (73.0%).


As shown in the figure above, the whole process consists of three steps: 1) extracting patches, 2) constructing VSAD, and 3) selecting the codebook.

  • Extracting Patches: We propose a patch-level CNN, called PatchNet, to model the appearance of local patches. PatchNet is trained in a weakly supervised manner, using only image-level supervision. Experimental results show that PatchNet is more effective than classical image-level CNNs at extracting semantic and discriminative features from local patches.
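The page does not include code for patch sampling; the sketch below shows one simple way to densely sample square patches on a regular grid before feeding them to a patch-level network. The patch size and stride are illustrative assumptions, not the paper's exact sampling scheme.

```python
import numpy as np

def extract_patches(image, patch_size=128, stride=64):
    """Densely sample square patches on a regular grid.

    image: (H, W, C) array. Returns an array of shape
    (num_patches, patch_size, patch_size, C).
    Note: patch_size/stride are illustrative defaults, not the
    paper's exact sampling parameters.
    """
    H, W = image.shape[:2]
    patches = []
    for y in range(0, H - patch_size + 1, stride):
        for x in range(0, W - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)
```

Each sampled patch would then be passed through PatchNet to obtain both a descriptor and a vector of semantic probabilities.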

  • Constructing VSAD: We present a new image representation, called VSAD, which aggregates the PatchNet features of local patches using semantic probabilities. VSAD differs from previous CNN+FV image representations in how local features are extracted and how posterior probabilities are estimated for feature aggregation.
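To make the aggregation concrete, here is a minimal NumPy sketch of a VSAD-style encoding: per-patch descriptors are soft-assigned to a semantic codebook using PatchNet's class probabilities, and probability-weighted residuals are accumulated per codeword. The normalization details (power and L2 normalization) follow common practice for FV-style encodings and are an assumption, not necessarily the paper's exact formulation.

```python
import numpy as np

def vsad(features, probs, centers):
    """VSAD-style aggregation (illustrative sketch).

    features: (N, D) patch descriptors from PatchNet
    probs:    (N, K) semantic probabilities per patch
    centers:  (K, D) per-category mean descriptors (semantic codebook)
    Returns a (K * D,) global vector.
    """
    K, D = centers.shape
    vec = np.zeros((K, D))
    for k in range(K):
        w = probs[:, k:k + 1]  # (N, 1) posteriors for codeword k
        # probability-weighted residuals, normalized by total soft mass
        vec[k] = (w * (features - centers[k])).sum(axis=0) / (w.sum() + 1e-8)
    vec = vec.ravel()
    # power + L2 normalization, standard for FV-style encodings (assumed here)
    vec = np.sign(vec) * np.sqrt(np.abs(vec))
    return vec / (np.linalg.norm(vec) + 1e-8)
```

The key difference from a classical Fisher vector is that the posteriors come from PatchNet's semantic predictions rather than a GMM fitted to the descriptors.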

  • Selecting Codebook: We exploit the VSAD representation for scene recognition and investigate its complementarity with global CNN representations and traditional feature encoding methods. Our method achieves state-of-the-art performance on two challenging scene recognition benchmarks, MIT Indoor67 (86.2%) and SUN397 (73.0%), outperforming previous methods by a large margin.
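One simple way to select a compact semantic codebook is to keep the object categories whose average activation varies most across scene classes. The criterion below (variance of per-scene mean activations) is a plausible discriminativeness proxy for illustration only, not necessarily the selection rule used in the paper.

```python
import numpy as np

def select_codebook(probs, labels, num_words):
    """Select discriminative codewords (semantic categories).

    probs:  (N, K) semantic probabilities, one row per image/patch
    labels: (N,) scene-class labels
    Returns indices of the num_words categories whose mean activation
    varies most across scene classes (an assumed proxy criterion).
    """
    classes = np.unique(labels)
    # mean activation of each category within each scene class: (C, K)
    class_means = np.stack([probs[labels == c].mean(axis=0) for c in classes])
    scores = class_means.var(axis=0)  # variance across scene classes
    return np.argsort(scores)[::-1][:num_words]
```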

  • Results & Analysis

    • Object-Scene Relationship

    • Object Responses for MIT Indoor

    • Exploration Study on the MIT Indoor Dataset

    • Evaluation of SIFT Features

    • Evaluation of Scene Deep Features

    • Evaluation of Scene-Object Hybrid Deep Features

    • Comparison with the State of the Art on MIT Indoor (left) and SUN397 (right)

    • Analysis of Our Semantic Object Codebook and Its Most Related Scene Categories


    If you use our trained models or the VSAD feature, please cite the following paper:

    Zhe Wang, Limin Wang, Yali Wang, Bowen Zhang, and Yu Qiao, "Weakly Supervised PatchNets: Describing and Aggregating Local Patches for Scene Recognition," IEEE Transactions on Image Processing (TIP), 2017

    Last Updated on 4th February, 2017