RealityScan: A New App for Creating 3D Models from Photos

Epic Games, the creators of Unreal Engine and Fortnite, have partnered with Capturing Reality, a leading developer of photogrammetry software, to bring you RealityScan, a free app that lets you create stunning 3D models from photos on your Android device.

RealityScan Is Now Also Available for Android

How RealityScan Works

RealityScan is free to download and use. It works by capturing multiple photos of an object or scene from different angles and processing them into a textured 3D mesh that can be exported to Unreal Engine or other 3D applications.

RealityScan app

Features and Benefits of RealityScan App

RealityScan is designed to be easy and intuitive to use, and it offers various features such as automatic alignment, color correction, texture generation, and mesh optimization. Users can also preview their 3D models in augmented reality (AR) mode or share them online with other creators.
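Under the hood, apps like this rely on multi-view triangulation: the same feature point, observed from two known camera positions, pins down a single 3D location. The sketch below illustrates that core photogrammetry step in Python; it is a simplified illustration of the general principle, not RealityScan’s actual code.

```python
import numpy as np

def triangulate(origin_a, dir_a, origin_b, dir_b):
    """Find the midpoint of the shortest segment between two viewing rays.

    Each ray is origin + t * direction; with noisy measurements the rays
    rarely intersect exactly, so the midpoint of the closest approach is
    a common estimate of the true 3D point.
    """
    dir_a = dir_a / np.linalg.norm(dir_a)
    dir_b = dir_b / np.linalg.norm(dir_b)
    # Solve for the ray parameters t_a, t_b minimizing the gap between rays.
    w0 = origin_a - origin_b
    a = dir_a @ dir_a
    b = dir_a @ dir_b
    c = dir_b @ dir_b
    d = dir_a @ w0
    e = dir_b @ w0
    denom = a * c - b * b
    t_a = (b * e - c * d) / denom
    t_b = (a * e - b * d) / denom
    p_a = origin_a + t_a * dir_a
    p_b = origin_b + t_b * dir_b
    return (p_a + p_b) / 2

# Two cameras one meter apart, both sighting the same point at (0, 0, 2):
cam_a = np.array([-0.5, 0.0, 0.0])
cam_b = np.array([0.5, 0.0, 0.0])
point = triangulate(cam_a, np.array([0.5, 0.0, 2.0]),
                    cam_b, np.array([-0.5, 0.0, 2.0]))
print(point)  # close to [0, 0, 2]
```

A real photogrammetry pipeline repeats this for thousands of matched features across dozens of photos, then builds a mesh over the resulting point cloud.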


Epic Games’ Vision for 3D Content Creation

According to Epic Games, RealityScan is part of their vision to democratize 3D content creation and make it accessible to everyone. The app is also compatible with Unreal Engine’s MetaHuman Creator, a tool that allows users to create realistic digital humans in minutes.

Availability and Compatibility of RealityScan App

RealityScan, previously available on iOS, now also supports Android devices with ARCore; the app requires at least 4 GB of RAM and 64 GB of storage.

What the Developers Say

In a press release, Marc Petit, General Manager of Unreal Engine at Epic Games, said: “We’re thrilled to partner with Capturing Reality to bring RealityScan to the Unreal Engine community. This app is a game-changer for anyone who wants to create high-quality 3D models from photos, whether they are hobbyists, professionals, or students.”

Martin Bujnak, CEO of Capturing Reality, added: “RealityScan is the result of years of research and development in photogrammetry. We’re excited to collaborate with Epic Games and leverage their expertise in 3D graphics and AR technology. We believe that RealityScan will open up new possibilities for 3D content creation and storytelling.”

How to Get Started with RealityScan

If you want to try out RealityScan for yourself, you can download it from the Google Play Store or visit the official website for more information.

Resources:

RealityScan | Free to download 3D scanning app for mobile – Unreal Engine

RealityScan is now available for Android devices! – Unreal Engine

More articles: https://www.salamaproductions.com/news/

MetaHuman Animator: A breakthrough in facial animation

MetaHuman, the groundbreaking framework that allows anyone to create realistic digital humans in minutes, has just gained a new feature set that takes facial animation to the next level. MetaHuman Animator is now available to all users of Unreal Engine, the world’s most open and advanced real-time 3D creation tool.

A showcase of MetaHuman Animator: Blue Dot

One of the most impressive examples of what MetaHuman Animator can do is Blue Dot, a short film created by Epic Games’ 3Lateral team in collaboration with local Serbian artists, including renowned actor Radivoje Bukvić, who delivers a monologue based on a poem by Mika Antic. The performance was filmed at Take One studio’s mocap stage with cinematographer Ivan Šijak acting as director of photography.

Blue Dot demonstrates the level of fidelity that artists and filmmakers can expect when using MetaHuman Animator with a stereo head-mounted camera system and traditional filmmaking techniques. The team was able to achieve this impressive level of animation quality with minimal interventions on top of MetaHuman Animator results. You can watch the short film here:

Blue Dot: A 3Lateral Showcase of MetaHuman Animator

How does it work?

MetaHuman Animator enables you to capture an actor’s performance using an iPhone or a stereo head-mounted camera system (HMC) and apply it as high-fidelity facial animation on any MetaHuman character, without the need for manual intervention. With it, you can capture the individuality, realism, and fidelity of your actor’s performance, and transfer every detail and nuance onto any MetaHuman.

MetaHuman Animator is designed to be easy to use, fast, and accurate. You can record your performance using the Live Link Face app on your iPhone, or use a compatible HMC system such as Faceware or Dynamixyz. Then, you can stream the data directly into Unreal Engine via Live Link, and see your MetaHuman character come to life in real time. You can also record the data and edit it later using Sequencer, Unreal Engine’s cinematic editing tool.

MetaHuman Animator Now Available

What can you do with it?

MetaHuman Animator is not only a powerful tool for creating realistic facial animation, but also a flexible and creative one. You can mix and match different MetaHumans and performances, and even blend them with other animation sources such as motion capture or keyframes. You can also adjust the intensity and timing of the facial expressions using curves and sliders. The possibilities are endless.

MetaHuman Animator is a game-changer for anyone who wants to create high-quality digital humans for games, films, TV shows, or any other project that requires realistic facial animation. It is also a great way to experiment with different emotions, personalities, and styles of acting. Whether you are a professional animator, a hobbyist, or a student, MetaHuman Animator will help you bring your vision to life.

Aaron Sims Creative and Ivan Šijak on Using MetaHuman Animator

How to get started?

MetaHuman Animator is now available for free to all Unreal Engine users. To get started, download Unreal Engine 5.2 or later from the Epic Games Launcher, and sign up for MetaHuman Creator at https://www.unrealengine.com/en-US/metahuman-creator.

You can also find more information and tutorials on the Unreal Engine website.

How to Use MetaHuman Animator in Unreal Engine

Resources:


MetaHuman Animator facial animation


NVIDIA ACE for Games

NVIDIA Kairos Demo Shows the Future of NPCs with Generative AI

NVIDIA has unveiled a stunning demo that showcases how generative AI can bring life and intelligence to virtual characters in games through NVIDIA ACE for games. The demo, called NVIDIA Kairos, features Jin, an NPC who runs a ramen shop and interacts with players using natural language and realistic facial expressions.

NVIDIA ACE for Games Sparks Life Into Virtual Characters With Generative AI

NVIDIA Kairos was created using NVIDIA Avatar Cloud Engine (ACE) for Games, a new service that allows developers to build and deploy customized speech, conversation, and animation AI models for NPCs. NVIDIA ACE for Games leverages NVIDIA’s expertise in AI and game development to provide optimized AI foundation models, such as:

  • NVIDIA NeMo, which enables developers to build and customize large language models that can reflect the character’s personality, backstory, and context. Developers can also use NeMo Guardrails to prevent counterproductive or unsafe conversations with NPCs.
  • NVIDIA Riva, which provides automatic speech recognition and text-to-speech features to enable live speech conversation with NPCs in any language.
  • NVIDIA Omniverse Audio2Face, which generates expressive facial animation for NPCs from just an audio source. Audio2Face supports Unreal Engine 5 and MetaHuman characters.
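Conceptually, the three modules chain into a speech-in, animation-out loop: Riva transcribes the player’s voice, a NeMo language model writes the character’s reply, Riva speaks it, and Audio2Face animates it. The toy sketch below illustrates only that data flow; every function is a stand-in, not the real ACE API.

```python
# Toy sketch of the NVIDIA ACE NPC loop. Each stage stands in for a
# cloud service (Riva ASR/TTS, NeMo LLM, Audio2Face); none of this is
# the real API -- it only illustrates how the pieces connect.

def speech_to_text(audio: bytes) -> str:
    # Placeholder for Riva automatic speech recognition.
    return audio.decode("utf-8")

def generate_reply(transcript: str, persona: str) -> str:
    # Placeholder for a NeMo large language model conditioned on the
    # character's personality, backstory, and context.
    return f"[{persona}] You said: {transcript}"

def text_to_speech(text: str) -> bytes:
    # Placeholder for Riva text-to-speech.
    return text.encode("utf-8")

def audio_to_face(audio: bytes) -> list:
    # Placeholder for Audio2Face: audio in, per-frame facial blendshape
    # weights out (here, one fake weight per byte, first four frames).
    return [b / 255 for b in audio[:4]]

def npc_turn(player_audio: bytes, persona: str) -> tuple:
    transcript = speech_to_text(player_audio)
    reply = generate_reply(transcript, persona)
    reply_audio = text_to_speech(reply)
    blendshapes = audio_to_face(reply_audio)
    return reply, blendshapes

reply, frames = npc_turn(b"one ramen, please", "Jin, ramen shop owner")
print(reply)
```

In production, each placeholder would be an asynchronous network call, and the blendshape stream would drive the character rig in the game engine.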

NVIDIA collaborated with Convai, an NVIDIA Inception startup that specializes in conversational AI for virtual worlds, to integrate NVIDIA ACE for Games modules into their platform. Convai’s platform enables developers to create and deploy AI characters in games and virtual worlds with ease.

“Generative AI has the potential to revolutionize the interactivity players can have with game characters and dramatically increase immersion in games,” said John Spitzer, vice president of developer and performance technology at NVIDIA. “Building on our expertise in AI and decades of experience working with game developers, NVIDIA is spearheading the use of generative AI in games.”

NVIDIA Kairos was rendered in Unreal Engine 5 using the latest ray-tracing features and NVIDIA DLSS. The demo was announced at COMPUTEX 2023, alongside other NVIDIA innovations across ACE for Games, Omniverse, and RTX. For more information, visit the NVIDIA website.

NVIDIA ACE for games


References:

NVIDIA Omniverse ACE

NVIDIA ACE for Games Sparks Life Into Virtual Characters With Generative AI


Google Bard AI Chat Launch

Google Launches Bard, a New AI Experiment that Lets You Chat with LaMDA

Google Bard AI chat, powered by LaMDA (previously a private research project inside Google, unavailable to the public), is now open for access requests. In our early testing, Bard proved a capable partner for writing code, with advantages such as access to up-to-date information, a good grasp of context, and a creative streak. It is still under development, but it has the potential to change the way programmers write code. We requested access, were lucky to be accepted instantly, and got the chance to try Bard early.

Google has launched a new AI experiment called Bard, which lets users chat with LaMDA, the company’s language model for dialogue applications.

What is LaMDA?

LaMDA is a deep learning model that can generate natural language responses for open-ended conversations. It was introduced by Google in 2021 as a way to make information more accessible and engaging for users.

LaMDA is trained on a large corpus of text from various sources, such as books, web pages, and social media posts. It can handle different topics, tones, and contexts, and it can adapt to the user’s preferences and goals.

LaMDA is also designed to be safe and aligned with Google’s principles for responsible AI. It has mechanisms to avoid generating harmful or misleading content, such as filters, feedback loops, and human oversight.

How does Bard work?

Bard is a direct interface to LaMDA that allows users to interact with it using natural language queries and commands. Users can sign up to try Bard at bard.google.com and start chatting with LaMDA on various topics and tasks.

Bard can help users with productivity, creativity, and curiosity. For example, users can ask Bard to write a poem, summarize an article, brainstorm ideas for a logo, or find the best deals on a product. Bard can also answer questions, explain concepts, or spark ideas.

Bard often provides multiple drafts of its response so users can pick the best starting point for them. Users can also ask follow-up questions or request alternatives from Bard. Bard learns from user feedback and behavior to improve its responses over time.

Why is Bard important?

Bard is an early experiment that showcases the potential of conversational AI to enhance human capabilities and experiences. It also marks Google’s entry into a race that includes OpenAI, whose GPT-4 model powers the new Bing.

Bard is currently available as a preview for users in the U.S. and the U.K., and Google plans to expand it to more countries and languages in the future. Google also welcomes feedback from users and experts to improve Bard and make it more useful and trustworthy.

Bard is a remarkable example of how AI can make information more accessible and engaging for users. It also challenges the traditional search engine model and embraces a more natural and interactive way of finding and creating content.


Resources:

https://bard.google.com/
https://blog.google/technology/ai/try-bard/
https://www.zdnet.com/article/how-to-use-google-bard-now/
https://www.wizcase.com/download/google-bard/
https://www.theverge.com/2023/4/21/23692517/google-ai-bard-chatbot-code-support-functions-google-sheet


Google Bard AI chat


New Bing is Powered by GPT-4

The New Bing is Powered by GPT-4, the Latest AI Breakthrough from OpenAI

Bing, the search engine from Microsoft, has undergone a major upgrade with the help of GPT-4, the latest and most advanced artificial intelligence system from OpenAI.

What is GPT-4?

GPT-4, which was announced by OpenAI on March 14, 2023, is a deep learning model that can generate natural language responses for a variety of tasks, such as answering questions, chatting, writing, and creating. It is based on a massive amount of data and computation, and it can learn from human feedback and real-world use.

GPT-4 is the successor to GPT-3, which was released in 2020 and was widely considered a breakthrough in natural language processing. GPT-4 surpasses GPT-3 in breadth of general knowledge, problem-solving ability, creativity, collaboration, and support for visual input.

How does Bing use GPT-4?

Microsoft confirmed that the new Bing is running on GPT-4, which it has customized for search. The new Bing allows users to search, answer, chat, and create at Bing.com, using natural language queries and commands. For example, users can ask Bing to write a poem, summarize an article, generate a logo, or find the best deals for a product.

The new Bing also benefits from the continuous improvements that OpenAI makes to GPT-4 and beyond. According to OpenAI, GPT-4 is safer and more aligned than its previous versions, and it can produce more accurate and factual responses. It also has more creativity and collaboration capabilities, and it can handle visual input and longer context.

How can I try the new Bing?

The new Bing is currently available as a preview for users who sign up at Bing.com. Microsoft said that it will update and improve the new Bing based on community feedback and user behavior.

The new Bing is a remarkable example of how AI can enhance human capabilities and experiences. It also showcases the collaboration between Microsoft and OpenAI, which have partnered since 2019 to accelerate the development and adoption of AI technologies.

new Bing GPT-4



Resources:

GPT-4 – OpenAI


Confirmed: the new Bing runs on OpenAI’s GPT-4 | Bing Search Blog

Microsoft invests $1 billion in OpenAI to pursue artificial general intelligence – The Verge


Epic Games Store Self-Publishing!

Epic Games Store Launches Self-Publishing Tools for Game Developers and Publishers

Epic Games has announced the launch of self-publishing tools for the Epic Games Store, allowing game developers and publishers to release their games on the platform with more efficiency and control. The self-publishing tools are available to any developer or publisher with games meeting the store’s requirements, which include supporting achievements, crossplay, age ratings, and prohibited content.

The self-publishing tools are accessible through the Epic Developer Portal, where developers and publishers can create and manage their store pages, upload builds, set prices, request ratings, and monitor performance. The Epic Developer Portal also provides access to other Epic services, such as Epic Online Services for cross-platform features, Unreal Engine for game development, and Support-A-Creator for affiliate marketing.

By distributing their games on the Epic Games Store, developers and publishers can reach a global audience of over 230 million users across 187 countries with 16 languages supported. They can also benefit from the 88/12 revenue split offered by Epic Games, as well as the option to use their own or a third-party payment solution to receive 100% of the revenue from in-app purchases. Additionally, Unreal Engine royalties are waived for in-store purchases using Epic’s payment processor.
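The difference between Epic’s 88/12 split and the industry’s more common 70/30 split is easy to put in numbers:

```python
def developer_take(gross: float, store_cut: float) -> float:
    """Revenue the developer keeps after the store's cut."""
    return gross * (1 - store_cut)

gross = 1_000_000  # $1M in gross sales
print(developer_take(gross, 0.12))  # Epic Games Store (88/12): $880,000
print(developer_take(gross, 0.30))  # typical 70/30 storefront: $700,000
```

On a million dollars of sales, that is an extra $180,000 kept by the developer before any in-app-purchase or royalty considerations.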

The Epic Games Store also offers various features to drive player engagement and retention, such as wishlists, achievements, store-wide promotions, and more. The store supports 76 payment methods with 47 regional currencies and more on the way. Users can also load up their Wallet with funds to spend on products and services in the store, now available in more than 140 countries.

Epic Games hopes that the self-publishing tools will lead to “exponential growth in our catalog of games and apps” and empower developers and publishers to bring their creative visions to life. The self-publishing tools are now available for registered developers on the Epic Games website¹². For more information and resources on the self-publishing tools, visit the Epic Games blog³ or watch the tutorial video⁴.

The Store is now open to all developers and publishers!

Epic Games Store self-publishing


Resources:

¹: https://store.epicgames.com/en-US/distribution
²: https://www.unrealengine.com/en-US/blog/self-service-publishing-now-available-for-the-epic-games-store
³: https://store.epicgames.com/en-US/news/epic-games-store-launches-self-publishing-tools-for-game-developers-and-publishers
⁴: https://dev.epicgames.com/community/learning/tutorials/WznG/epic-games-store-publishing-tools-overview

Additional Resources:

Self-service publishing now available for the Epic Games Store!
Release Your Game on the Epic Games Store


Unreal Engine 5 Caustics


Unreal Engine 5 NvRTX Caustics Branch Released

Thanks to Epic Games and NVIDIA, Unreal Engine 5 now has an NvRTX Caustics branch. The branch became publicly available on 8 February 2023.

NVIDIA has released a new branch of Unreal Engine 5 that includes support for ray-traced caustics. This new feature allows developers to create more realistic and immersive lighting effects in their games.

Caustics are the patterns of light created when light rays are refracted or reflected off surfaces. They can be seen in many real-world situations, such as the light patterns created by sunlight shining through a window or the rippling light on the bottom of a fish tank.

In the past, caustics were difficult to render in real time. However, with the advent of ray tracing, it is now possible to create these effects in real time with high accuracy.

The NvRTX Caustics branch for UE5 uses NVIDIA’s RTX technology to accelerate the rendering of ray-traced caustics. This allows developers to create these effects without sacrificing performance.

The NvRTX Caustics branch is currently available for early access. To download it, you can visit the NVIDIA developer website.

Unreal Engine 5 Caustics

How to Render Realistic Water Caustics in UE5

One of the most impressive features of the NvRTX caustics branch is the ability to render realistic water caustics in real time. Water caustics are the patterns of light that are created when light passes through a water surface and is refracted by the waves and ripples. Water caustics can add a lot of realism and immersion to scenes that involve water, such as oceans, lakes, rivers, pools, fountains, etc. However, water caustics are also very challenging to simulate in real time, as they require a lot of computation and memory bandwidth.

The NvRTX caustics branch uses a novel technique that combines ray tracing and rasterization to achieve high-quality water caustics at playable frame rates. The technique involves tracing rays from the light source to the water surface, then using a screen-space shader to compute the refraction angle and intensity of the light based on the water normal map and depth buffer. The result is a realistic and dynamic water caustics effect that can be applied to any water material in Unreal Engine 5. You can see some examples of water caustics in action in this video.
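The bending of light at the water surface that such a shader evaluates per ray is governed by Snell’s law. Below is a minimal Python sketch of the standard refraction-vector formula; it illustrates the underlying physics, not NVIDIA’s shader code.

```python
import numpy as np

def refract(incident, normal, eta):
    """Refract a unit incident vector through a surface with unit normal.

    eta is the ratio of indices of refraction (n1 / n2); air to water is
    roughly 1.0 / 1.33. Returns None on total internal reflection.
    """
    cos_i = -incident @ normal
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection: no transmitted ray
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * incident + (eta * cos_i - cos_t) * normal

# Light hitting a flat water surface head-on passes through unbent:
down = np.array([0.0, -1.0, 0.0])
up = np.array([0.0, 1.0, 0.0])
print(refract(down, up, 1.0 / 1.33))  # stays [0., -1., 0.]
```

A caustics shader runs this computation for every ray hitting the water, using the normal map to perturb `normal` per ripple; where many refracted rays converge on the floor below, a bright caustic band forms.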


For more information about the Unreal Engine NvRTX branches:

https://developer.nvidia.com/game-engines/unreal-engine/rtx-branch

The branch itself is hosted on GitHub:

https://github.com/NvRTX/UnrealEngine/tree/NvRTX_Caustics-5.1


RTXDI – RTX Direct Illumination

NVIDIA Launches RTX Direct Illumination (RTXDI), a Revolutionary Technology for Real-Time Lighting

NVIDIA has announced the launch of RTX Direct Illumination (RTXDI), a new technology that enables and accelerates the rendering of dynamic direct lighting and shadows from millions of light sources in real time. RTXDI is part of the NVIDIA RTX platform, which leverages the power of ray tracing to create stunning and immersive graphics.

RTXDI is a game-changer for artists and developers who want to create realistic and complex lighting scenarios in their games and applications. With RTXDI, any geometry of any shape can emit light, cast appropriate shadows, and move freely and dynamically. This allows for unprecedented levels of detail and realism in scenes such as night amusement parks, Times Square billboards, neon signs, exploding fireballs, and more.

RTXDI works as an oracle for shadow rays, telling the renderer where to send rays to efficiently sample the lighting environment. This reduces the manual tuning required to light scenes and improves the productivity of any art pipeline. RTXDI also supports any number of light sources, from a few to millions, without compromising performance or quality.
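One standard building block for choosing among huge light sets in a single pass is weighted reservoir sampling, which underpins the resampling approach (ReSTIR) that RTXDI builds on. The following is a toy Python sketch of that sampling primitive, for illustration only; it is not the RTXDI SDK.

```python
import random

def sample_light(lights, rng):
    """Pick one light via weighted reservoir sampling in a single pass.

    Each candidate replaces the current pick with probability
    weight / total_weight_so_far, so the final pick is distributed in
    proportion to the weights -- without ever storing more than one
    candidate, no matter how many lights there are.
    """
    total = 0.0
    chosen = None
    for light in lights:
        w = light["intensity"]  # stand-in for estimated contribution
        total += w
        if rng.random() < w / total:
            chosen = light
    return chosen, total

rng = random.Random(42)
lights = [{"id": i, "intensity": 1.0 + i} for i in range(1000)]
light, total_weight = sample_light(lights, rng)
print(light["id"], total_weight)
```

The renderer then casts a single shadow ray toward the chosen light per pixel, instead of one ray per light, which is what makes millions of emitters tractable in real time.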

RTXDI can be combined with other NVIDIA ray tracing SDKs, such as RTX Global Illumination (RTXGI) for fast and scalable indirect lighting, to achieve even more stunning results. RTXDI is compatible with any DirectX 12 ray tracing API and can be integrated into any game engine that supports ray tracing.

NVIDIA has released a demo video of RTXDI in action, showcasing a scene with over 100,000 dynamic lights rendered in real time on a single GeForce RTX 3090 GPU. The video also compares RTXDI with prior state-of-the-art sampling techniques, demonstrating how RTXDI can generate a beautiful finished image with the same level of overhead.

RTXDI is available now as an early access SDK for registered developers on the NVIDIA Developer website¹. NVIDIA plans to bring this technology to game developers in 2021². For more information and resources on RTXDI, visit the NVIDIA Developer blog³.

RTX Boulevard - Direct Illumination Demo


References:

¹: NVIDIA | NVIDIA Developer
²: Render Millions of Direct Lights in Real-Time With RTX Direct Illumination | NVIDIA Technical Blog
³: Lighting Scenes with Millions of Lights Using RTX Direct Illumination | NVIDIA Technical Blog


Bing Chat

Microsoft now has an AI chat of its own: Bing Chat.

I tried Bing Chat myself, and it’s pretty amazing. It helped me a lot, especially with creative tasks. When I was seeking knowledge about a specific topic, it surprised me by giving me exactly what I needed to know, acting like a human mentor who understands what you are asking, unlike searching for a topic in a traditional search engine. I highly recommend it!

Bing Chat

What Can Bing Chat Do For You?

Bing Chat is not only a powerful search engine, but also a fun and creative chatbot that can generate various types of content for you. Whether you want to write a poem, a story, a song, some code, or a parody, Bing Chat can help you with that. You just need to ask Bing Chat to create something for you, and it will use its artificial intelligence to generate original and unique content based on your request.

  • Poem: You can ask Bing Chat to write a poem about any topic or theme, such as love, nature, happiness, etc. Bing Chat will then use its natural language generation skills to produce a poem that follows the rhyme and rhythm of your choice. For example, you can ask Bing Chat to write a sonnet about love, a haiku about nature, or a limerick about happiness.
  • Story: You can ask Bing Chat to write a story about any genre or setting, such as fantasy, sci-fi, horror, etc. Bing Chat will then use its natural language understanding and storytelling abilities to create a story that has a plot, characters, dialogue, and action. For example, you can ask Bing Chat to write a story about aliens invading Earth, a story about wizards and dragons, or a story about zombies and vampires.
  • Song: You can ask Bing Chat to write a song in any mood or style, such as pop, rock, rap, etc. Bing Chat will then write lyrics that follow the rhythm and tone of your chosen style. For example, you can ask Bing Chat to write a pop song about summer, a rock song about freedom, or a rap song about money.
  • Code: You can ask Bing Chat to write code in any programming language for any task, such as Python, Java, C++, etc. Bing Chat will then use its programming knowledge and logic skills to write code that performs the function or operation you want. For example, you can ask Bing Chat to write a calculator in Python, a game in Java, or a web server in C++.
  • Parody: You can ask Bing Chat to write a parody of any person or thing, such as celebrities, politicians, movies, etc. Bing Chat will then use its humor and creativity to write a parody that mimics or mocks the style or behavior of your chosen subject. For example, you can ask Bing Chat to parody Donald Trump’s tweets, the Harry Potter books, or the movie Titanic.

You can also use Bing Chat to create stunning images with its built-in image creation tool. You just need to give it a prompt or a description of the kind of image you want, and it will generate an image for you.

  • Graphic Art: You can ask Bing Chat to create an image of anything you can imagine, such as animals, landscapes, objects, etc. Bing Chat will then use its image generation model to synthesize an image that matches your prompt. For example, you can ask for an image of a unicorn in space, an Egyptian pyramid at night, or an apple with wings.

Bing Chat is more than just a chatbot. It is your personal assistant, your friend, and your partner in crime. You can use it to search for information, answer questions, solve problems, learn new things, have fun conversations, and express yourself in various ways. Bing Chat is always there for you, no matter what you need or want. Just type or say “Hi” to start chatting with Bing Chat today!


References:
https://blogs.microsoft.com/blog/2023/02/07/reinventing-search-with-a-new-ai-powered-microsoft-bing-and-edge-your-copilot-for-the-web/

