

OakInk: A Large-scale Knowledge Repository for Understanding Hand-Object Interaction

1Shanghai Jiao Tong University, 2Shanghai Qi Zhi Institute
* Equal contribution    ✉ Corresponding author
CVPR 2022

OakInk consists of 1) OakBase of object affordance knowledge and 2) InkBase of hand interaction knowledge.

Update

  • Jun 19, 2024:   OakInk2 @ CVPR 2024 is released!
  • Apr 04, 2024:   OakInk can now be cloned directly from the Hugging Face dataset hub at OakInk-v1.
  • Dec 11, 2023:   Grasp Generation models on OakInk-Shape are released!
  • Feb 11, 2023:   OakBase is released!
  • Jan 03, 2023:   Hand Mesh Recovery models on OakInk-Image are released!
  • Oct 18, 2022:   OakInk public v2.1 is released!
    This update fixes several artifacts, including wrong poses, time delays, and contact-surface mismatches. NOTE: if you downloaded the OakInk dataset before 11:00 AM, October 18, 2022 (UTC), you only need to replace the previous anno.zip with the newly released anno_v2.1.zip (access via Google Forms), unzip it while keeping the same file structure as before, and install the latest OakInk Toolkit.
  • Jul 26, 2022:   Tink is made public.
  • Jun 28, 2022:   OakInk public v2 and the OakInk Toolkit, a Python dataloader, are released!
  • Mar 03, 2022:   OakInk is accepted to CVPR 2022.

About

OakInk contains three datasets:
  • OakBase: Object Affordance Knowledge (Oak) base, including objects' part-level segmentation and attributes.
  • OakInk-Image: a video dataset with 3D hand-object pose and shape annotations.
  • OakInk-Shape: a 3D grasping pose dataset with hand and object mesh models.
OakInk-Image contains 230K image frames that capture a total of 12 subjects performing up to 5 intent-oriented interactions with 100 objects from 32 categories. The object poses were captured with a MoCap system, while the MANO hand poses were fitted from 2D keypoint annotations. Based on the hand-object poses from these real-world human demonstrations, we transfer the hand poses on the real-world objects to virtual objects with similar affordances through an interaction transfer module: Tink. The real-world and transferred interactions together constitute the geometry-based dataset, OakInk-Shape, which contains 50K distinct hand-object poses and models.
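Because the hand annotations are MANO parameters rather than raw meshes, recovering a hand surface requires a MANO layer. Below is a minimal sketch using manotorch, a PyTorch MANO wrapper; the asset path, constructor arguments, and output field names are assumptions, so verify them against your installed version.

```python
import torch
from manotorch.manolayer import ManoLayer  # assumes manotorch is installed

# Assumption: MANO assets (MANO_RIGHT.pkl, ...) live under assets/mano.
mano_layer = ManoLayer(mano_assets_root="assets/mano", side="right")

# OakInk stores per-frame MANO parameters; zeros here are stand-ins.
pose_coeffs = torch.zeros(1, 48)  # 3 global-rotation + 45 hand-pose coefficients
betas = torch.zeros(1, 10)        # shape coefficients

mano_output = mano_layer(pose_coeffs, betas)
hand_verts = mano_output.verts    # (1, 778, 3) hand mesh vertices
hand_joints = mano_output.joints  # (1, 21, 3) hand joint positions
print(hand_verts.shape, hand_joints.shape)
```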

Download

OakInk

Download at: Hugging Face.
For researchers in China, you can also download OakInk from the alternative mirror: Baidu Netdisk (百度云盘, extraction code: hrt9).
After downloading all the files, complete the Google Form to obtain the annotation file.
Arrange all zip files into a directory, e.g. $OAKINK_DIR/zipped, as follows:
$OAKINK_DIR/zipped
  ├── OakBase.zip
  ├── image
  │   ├── anno_v2.1.zip  # access via Google Forms
  │   ├── obj.zip
  │   └── stream_zipped
  │       ├── oakink_image_v2.z01
  │       ├── ...
  │       ├── oakink_image_v2.z10
  │       └── oakink_image_v2.zip
  └── shape
      ├── metaV2.zip
      ├── OakInkObjectsV2.zip
      ├── oakink_shape_v2.zip
      └── OakInkVirtualObjectsV2.zip
Then follow the instructions to verify the checksums and unzip the files.
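If you prefer to script the Hugging Face download, the sketch below uses huggingface_hub's snapshot_download and then streams a file through SHA-256 for checksum verification. The dataset repo id is an assumption; substitute the real one from the OakInk-v1 page, and compare the printed digest against the published checksums.

```python
import hashlib
import os

from huggingface_hub import snapshot_download  # pip install huggingface_hub

OAKINK_DIR = os.environ.get("OAKINK_DIR", "./OakInk")

# Assumption: the dataset lives under a repo id like "oakink/OakInk-v1";
# replace it with the id listed on the project page.
snapshot_download(
    repo_id="oakink/OakInk-v1",
    repo_type="dataset",
    local_dir=os.path.join(OAKINK_DIR, "zipped"),
)

def sha256sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large zips never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the checksum published with the release.
print(sha256sum(os.path.join(OAKINK_DIR, "zipped", "OakBase.zip")))
```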

Resources

For details of the dataset annotations, refer to Data documentation.
For the dataset splits used in various tasks, refer to Data splitting.
To load OakBase and visualize object parts and attributes, refer to demo_oak_base.py.
To load OakInk-Image and OakInk-Shape for visualization, refer to Load and visualize.
To train hand mesh recovery models on OakInk-Image, refer to OakInk-HMR.
To train grasp generation models on OakInk-Shape, refer to OakInk-Grasp-Generation.
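As a rough illustration of what loading looks like with the OakInk Toolkit (oikit), the sketch below iterates over OakInk-Shape grasps. The import path, constructor keywords, and per-sample field names are assumptions based on the toolkit's examples; check them against the toolkit's README and demo scripts.

```python
import os

from oikit.oi_shape import OakInkShape  # assumption: module path per the OakInk Toolkit

# The toolkit locates the dataset via the OAKINK_DIR environment variable.
os.environ.setdefault("OAKINK_DIR", "./OakInk")

# Assumption: category and intent filters are passed as keyword arguments.
grasps = OakInkShape(category="teapot", intent_mode="use")

for grasp in grasps:
    # Assumed per-sample fields: MANO hand mesh vertices, plus object geometry.
    print(grasp["hand_verts"].shape)  # e.g. (778, 3)
    break
```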

BibTeX

@InProceedings{YangCVPR2022OakInk,
    author = {Yang, Lixin and Li, Kailin and Zhan, Xinyu and Wu, Fei and Xu, Anran and Liu, Liu and Lu, Cewu},
    title = {{OakInk}: A Large-Scale Knowledge Repository for Understanding Hand-Object Interaction},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2022},
}