Intel shows AI model for generating three-dimensional 360-degree images


Intel has shown its Latent Diffusion Model for 3D (LDM3D), an AI model that generates 3D images. According to Intel, it is the first model that generates a depth map to create ‘vibrant’ three-dimensional 360-degree images.

According to Intel, the LDM3D model can create a depth map and a three-dimensional image from a text prompt ‘with almost the same amount of parameters’ as existing models that generate 3D images. The manufacturer claims that the relative depth of each pixel is accurately estimated, which should save developers “a lot of time when developing scenes.” As examples, Intel mentions that the model can be used to create virtual reality games and virtual museums.

The dataset for LDM3D consists of 10,000 samples from the LAION-400M database, which contains 400 million images with associated descriptions. The Dense Prediction Transformer model is used to estimate the relative depth of the pixels. Based on the generated 2D image and depth map, Blockade Labs’ integrated DepthFusion application then creates a 360-degree projection. That AI company shows some examples of 360-degree environments generated with the model in a video on Twitter. Intel has made LDM3D open source via Hugging Face.
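For readers who want to try the model themselves, a minimal sketch of how the Hugging Face release could be used is shown below. It assumes the diffusers library’s StableDiffusionLDM3DPipeline and the Intel/ldm3d checkpoint; the prompt and output file names are illustrative only, and the resulting depth map would then be handed to a tool such as DepthFusion for the 360-degree projection.

```python
import torch
from diffusers import StableDiffusionLDM3DPipeline

# Checkpoint name is an assumption; see Intel's page on Hugging Face for the available variants.
pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

prompt = "a sunlit hall in a virtual museum, 360-degree panorama"
output = pipe(prompt)

# The pipeline returns both an RGB image and the matching depth map for the same prompt.
rgb_image = output.rgb[0]
depth_image = output.depth[0]

rgb_image.save("museum_rgb.png")
depth_image.save("museum_depth.png")
```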

