Tactile pose estimation is an active area of research in the field of robotic manipulation. With the advent of high-resolution vision-based tactile sensors, robots can now acquire detailed tactile information within their grippers, which can be used for in-hand pose estimation. However, current methods typically rely on prior models of the grasped objects or require extensive training to generalize to diverse objects. In this study, we explore the potential of active inference for tactile pose estimation that adapts to various objects without large-scale training. We first validate our approach on a single object to assess whether active inference can be effectively applied to in-hand pose estimation. We then test our approach on multiple objects to evaluate its generalizability to unseen objects. We assess our methodology through a simple tilt estimation task in a simulated environment.