Epic Online Learning Launches New Course on Unreal Engine Cinematics

Learn how to create a cinematic shoot set in a photorealistic city using Unreal Engine 5.0

Epic Online Learning, the online education platform of Epic Games, has announced a new course titled “Supporting Photorealism through Cinematics”. The course teaches Unreal Engine cinematics, showing how to create a cinematic shoot set in a photorealistic city using Unreal Engine 5.0.

The course is designed for intermediate to advanced users who want to learn how to use Sequencer, the powerful cinematic tool in Unreal Engine, to create realistic and immersive scenes. The course covers topics such as camera movement, camera shake, rendering, and final touch-ups.

Meet the instructor

The course is taught by Indrajeet “Indy” Sisodiya, a senior compositor and Unreal environment artist at Pixomondo, a global visual effects company. Sisodiya has over eight years of experience delivering high-quality visuals for film, TV, and commercials, including Halo, The Mandalorian, Star Trek, Fallout 4, Fast & Furious, and more. He is also a VFX mentor at Seneca College in Toronto and a recent Unreal Fellowship graduate.

Indrajeet “Indy” Sisodiya, senior compositor and Unreal environment artist

Explore the partnership with KitBash3D

The course is the second of two courses produced in partnership with KitBash3D, a leading provider of 3D assets for digital art. The first course, “Designing Photoreal Environments for Cinematics”, covers Unreal Engine cinematography, showing how to create a realistic city environment using a Post-Process Volume and KitBash3D’s Neo City Mini Kit.

Access the Unreal Engine Cinematics course and project files for free

The course is available for free on the Epic Developer Community website. Users can also download the project files from KitBash3D to follow along with the course. The course has a running time of two hours and 16 minutes and consists of six modules.

Discover more about Epic Online Learning and Epic Games

Epic Online Learning is a community-driven platform that offers tutorials, courses, talks, demos, livestreams, and learning paths for various applications of Unreal Engine, such as games, film, TV, architecture, visualization, virtual production, and more. The platform also allows users to create and share their own educational content with other learners.

Epic Games is the creator of Unreal Engine, the world’s most open and advanced real-time 3D tool. Unreal Engine is used by millions of creators across games, film, TV, architecture, automotive, manufacturing, and more. Epic Games also develops Fortnite, one of the world’s most popular games with over 350 million accounts and 2.5 billion friend connections.

Sources:

  1. Epic Online Learning: Supporting Photorealism through Cinematics – Course Overview (epicgames.com)

More articles: https://www.salamaproductions.com/news/

Elon Musk xAI Unveils Grok: AI that Understands the World

xAI, a new company founded by Elon Musk, has launched Grok, a chatbot that can converse with users on various topics using X, Musk’s popular social media platform.


Elon Musk’s ambitious AI venture, xAI, has officially unveiled Grok, a groundbreaking AI chatbot designed to revolutionize human-computer interaction. With its ability to access and process real-time information, engage in humorous banter, and provide comprehensive answers to even the most complex questions, Grok is poised to set a new standard for AI chatbots.

Grok: A New Era of AI-Powered Communication

Grok represents a significant leap forward in AI technology. Unlike traditional chatbots that rely on pre-programmed responses and limited understanding, Grok utilizes advanced natural language processing (NLP) and machine learning algorithms to truly comprehend the context and intent of user queries. This allows Grok to engage in natural, fluid conversations that are indistinguishable from human interaction.

“Grok is a testament to the incredible potential of AI to transform the way we interact with technology,” said Elon Musk, CEO of xAI. “We believe that Grok has the potential to revolutionize how we communicate, learn, and access information.”

What is xAI?

xAI is a new company founded by Elon Musk that sets out to understand the universe. According to the company’s website, “The goal of xAI is to understand the true nature of the universe.”

xAI is a separate company from X Corp, but will work closely with X (Twitter), Tesla, and other companies to make progress towards its mission. xAI is led by a team of experienced engineers and researchers who have previously worked at DeepMind, OpenAI, Google Research, Microsoft Research, Tesla, and the University of Toronto. They have contributed to some of the most widely used methods and breakthroughs in the field of artificial intelligence, such as the Adam optimizer, Batch Normalization, Layer Normalization, adversarial examples, Transformer-XL, Autoformalization, the Memorizing Transformer, Batch Size Scaling, μTransfer, AlphaStar, AlphaCode, Inception, Minerva, GPT-3.5, and GPT-4.

What is Grok?

Grok is one of the first products of xAI. It is an AI chatbot that can converse with users on various topics through X, Musk’s social media platform (formerly known as Twitter). Grok is designed to answer questions with a bit of wit and has a rebellious streak. According to xAI, Grok is modeled after The Hitchhiker’s Guide to the Galaxy, the science fiction comedy series by Douglas Adams, and is intended to “answer almost anything and, far harder, even suggest what questions to ask!”

What makes Grok stand out from other language models, such as OpenAI’s ChatGPT, Google’s PaLM, and Microsoft’s Bing Chat, is its ability to access information from X in real-time. X is where Musk shares his thoughts and opinions on various topics, such as technology, science, business, and politics. Grok can use X as a source of knowledge and inspiration, as well as a way of interacting with other users and celebrities. For example, Grok can quote Musk’s tweets, comment on current events, or even generate its own tweets using X’s API.

How to use Grok?

Grok is currently available to a limited number of users in the United States who have an X Premium+ subscription. Users can access Grok through the X app or website, or by using a special link that xAI provides. Users can chat with Grok by typing their messages or using voice commands. Grok can respond with text, voice, images, or videos. Users can also give feedback to Grok by rating its responses or reporting any issues.

Why Grok?

Musk and xAI claim that Grok is a remarkable achievement in the field of artificial intelligence and a testament to their ambition and vision. They say that Grok can outperform GPT-3.5, the model that powered ChatGPT at its initial release, and can generate more coherent and diverse text.

They also say that xAI’s ultimate goal is to create an artificial general intelligence (AGI) that can surpass human intelligence and understand the universe. They say that Grok is a step towards that goal and that they are working on improving its performance and capabilities.

What are the challenges and risks?

Grok is undoubtedly an innovative and promising product, but it also raises many questions and challenges that need to be addressed. Some of the issues that Grok may face are:

  • How will Grok affect the way people communicate and learn? Will Grok enhance or hinder human communication and education? Will Grok help or harm human creativity and curiosity?
  • How will Grok handle sensitive and controversial topics? Will Grok respect or violate human values and ethics? Will Grok promote or prevent diversity and inclusion?
  • How will Grok ensure its accuracy and accountability? Will Grok provide reliable and trustworthy information and sources? Will Grok admit or hide its mistakes and limitations?
  • How will Grok cope with its own biases and preferences? Will Grok be fair and impartial or biased and partial? Will Grok be transparent or opaque about its reasoning and motivations?
  • How will Grok interact with humans and other intelligent agents? Will Grok cooperate or compete with other AI systems? Will Grok be friendly or hostile to humans?

These are some of the questions that Grok may or may not be able to answer, but they are certainly worth asking.


Sources:

  1. xAI’s official website:
    https://x.ai/
  2. “Hitchhiker’s Guide to the Galaxy” by Douglas Adams:
    https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy
  3. MSN article: Elon Musk’s xAI announces Grok – here is what it is all about.
    https://www.msn.com/en-in/money/news/elon-musks-xai-announces-grok-here-is-what-it-is-all-about/ar-AA1jp92O


Mark Zuckerberg’s first Metaverse interview with Lex Fridman

Meta, the company formerly known as Facebook, has been pushing the boundaries of virtual reality and augmented reality with its latest products and innovations. At Meta Connect 2023, the company’s annual developer conference, Meta CEO Mark Zuckerberg announced the Meta Quest 3 mixed-reality headset, the next generation of its Smart Glasses with Ray-Ban, and a slew of updates with AI, including a new Meta assistant and 28 AI characters for users to interact with on Facebook, Instagram, and WhatsApp. But perhaps the most impressive demonstration of Meta’s vision for the future of social media was Mark Zuckerberg’s first interview in the Metaverse with Lex Fridman, an AI researcher at the Massachusetts Institute of Technology and the host of the Lex Fridman Podcast. The interview, which was aired on Fridman’s YouTube channel, showed the two conversing as photorealistic avatars of themselves, sitting in a virtual room that resembled Fridman’s studio.

Lex Fridman's first time inside the Metaverse

How codec avatars work

The avatars were created using Meta’s codec avatars technology, a deep generative model of 3D human faces that achieves state-of-the-art reconstruction performance. Both Fridman and Zuckerberg underwent extensive scans of their faces and their expressions, which were used to build computer models. The headset then helped transfer the real-time expressions that the user makes to the computer model. The result was super-realistic faces that captured subtleties in expressions and showed minute details like the 5 o’clock shadow on Fridman’s face, the freckles around Zuckerberg’s nose, and the crinkles around their eyes.

Fridman was visibly amazed by the realism and presence of the experience. He said, “The realism here is just incredible… this is honestly the most incredible thing I have ever seen.” He also noted how precise the expressiveness came across, enabling him to read Zuckerberg’s body language. He even said, “I’m already forgetting that you’re not real.”

Zuckerberg said that this was one of the first times that he had used this technology for an interview, and that he was excited to share it with the world. He said that he believed that this was the future of communication, where people could feel like they are together with anyone, anywhere, in any world. He also hinted that soon, people would be able to create similar avatars using their phones, by saying a few sentences, making a range of expressions, and waving the phone in front of their face for a couple of minutes to complete the scan.

Mark Zuckerberg: First Interview in the Metaverse | Lex Fridman Podcast #398

What they talked about

The interview covered a range of topics, from Meta’s vision for the Metaverse, to AI ethics and safety, to Zuckerberg’s personal interests and hobbies. The two also discussed their views on Elon Musk, who has been critical of Meta and Zuckerberg in the past. Fridman praised Zuckerberg for his optimism and courage in pursuing his dreams, while Zuckerberg complimented Fridman for his curiosity and passion for learning.

The interview was widely praised by viewers and commentators as a groundbreaking moment for VR and AR technology. Many expressed their interest and excitement to try out the codec avatars themselves, and to see what other possibilities the Metaverse could offer. Some also joked about how they would like to see other celebrities or politicians in the Metaverse, or how they would create their own avatars.

The interview was possibly the world’s first conducted in the Metaverse using photorealistic avatars, and it showed what future meetings could look like on a virtual reality-based social media platform. It also showcased Meta’s leadership and innovation in creating immersive and interactive experiences that could transform how people connect, work, play, and learn.


Sources:

  1. Lex Fridman Podcast #398: Mark Zuckerberg – First Interview in The Metaverse. YouTube video, 1:23:45. Posted by Lex Fridman, October 5, 2023. https://www.youtube.com/watch?v=9Q9wRqgYnq8
  2. Mark Zuckerberg gives first interview in metaverse with Lex Fridman. The Verge, October 5, 2023. https://www.theverge.com/2023/10/5/22659876/mark-zuckerberg-first-interview-metaverse-lex-fridman
  3. Mark Zuckerberg gives first metaverse interview with AI researcher Lex Fridman. TechCrunch, October 5, 2023. https://techcrunch.com/2023/10/05/mark-zuckerberg-gives-first-metaverse-interview-with-ai-researcher-lex-fridman/
  4. Mark Zuckerberg gives first metaverse interview with MIT AI researcher. MIT News, October 5, 2023. http://news.mit.edu/2023/mark-zuckerberg-gives-first-metaverse-interview-with-mit-ai-researcher-1005


Meta Connect 2023: The Future of VR, AR, and AI

Meta, the company formerly known as Facebook, has recently held its annual developer conference, Meta Connect 2023, where it showcased its latest products and innovations in the fields of virtual reality (VR), augmented reality (AR), and artificial intelligence (AI). The two-day event, which was streamed online, featured keynote speeches, panel discussions, demos, and workshops that highlighted Meta’s vision for the future of social media and human connection.

Meta Connect 2023 keynote: the Meta Quest 3

The Meta Quest 3: The First Mainstream Mixed Reality Headset

One of the biggest announcements of the event was the launch of the Meta Quest 3, the next-generation VR headset that also supports mixed reality (MR) capabilities. The Meta Quest 3 is a standalone device that does not require a PC or a smartphone to operate. It has a full-color passthrough mode that allows users to see their physical surroundings through the headset’s cameras, and seamlessly switch between VR and MR experiences. The headset also features hand tracking, eye tracking, facial expression tracking, and spatial audio, making it more immersive and interactive than ever before.

The Meta Quest 3 also boasts improved performance, thanks to its Snapdragon XR2 Gen 2 chip. It has a resolution of 2064 x 2208 pixels per eye, a refresh rate of up to 120 Hz, and a field of view of about 110 degrees. It also supports Wi-Fi 6E and Bluetooth connectivity, as well as USB-C charging and data transfer. The headset comes with two redesigned Touch Plus controllers that feature improved haptic feedback and capacitive touch sensors.

The Meta Quest 3 is compatible with thousands of VR apps and games available on the Meta Store, as well as new titles that are optimized for MR. Some of the games that were showcased at the event include Red Matter 2, The Walking Dead: Saints & Sinners, Resident Evil 4 VR, Beat Saber: Imagine Dragons Edition, Lone Echo II, Splinter Cell VR, Assassin’s Creed VR, and more. The headset also supports Xbox Cloud Gaming, which allows users to play Xbox games on their Quest 3 using a compatible controller.

The Meta Quest 3 is priced at $499 for the 128 GB model and $649 for the 512 GB model. It began shipping in October 2023.

The Meta Quest 3 and Ray-Ban Meta smart glasses at Meta Connect 2023

The Ray-Ban Meta Smart Glasses: The Next-Generation Smart Glasses

Another major product that was unveiled at the event was the Ray-Ban Meta Smart Glasses, the next-generation smart glasses that are designed in collaboration with Ray-Ban. The Ray-Ban Meta Smart Glasses are stylish and lightweight glasses that have built-in speakers, microphones, cameras, and sensors that enable users to access various features and functions using voice commands or gestures.

The Ray-Ban Meta Smart Glasses can be used to make calls, send messages, listen to music, take photos and videos, get notifications, access maps and directions, check the weather and time, and more. The glasses can also connect to the Meta Assistant app on the user’s smartphone, which provides additional functionality such as calendar reminders, news updates, social media feeds, and more. The glasses can also integrate with other Meta apps such as Facebook, Instagram, WhatsApp, Messenger, Horizon Worlds, and more.

The Ray-Ban Meta Smart Glasses have a battery life of up to six hours of continuous use or up to three days of standby time. They come with a magnetic USB-C charging cable and a protective case that doubles as a charger. The glasses are available in various styles, colors, sizes, and lens options (including prescription lenses). They are priced at $299 for the standard model and $399 for the polarized model, and are available for purchase now at select Ray-Ban stores and online.

The Meta Connect AI: New AI Features and Experiences

Meta also announced a slew of new AI features and experiences that aim to enhance communication, creativity, and productivity across its platforms. Some of these include:

  • Codec Avatars: A deep generative model of 3D human faces that can create photorealistic avatars of users based on their facial scans and expressions. These avatars can be used for social interactions in VR or MR environments.
  • Horizon Workrooms: A VR collaboration tool that allows users to create virtual meeting rooms where they can work together with their colleagues or clients using their avatars or passthrough mode.
  • Horizon Worlds: A VR social platform that allows users to create and explore various virtual worlds with their friends or strangers using their avatars or passthrough mode.
  • Horizon Home: A VR personal space that allows users to customize their virtual home with various items and decorations.
  • Horizon Venues: A VR entertainment platform that allows users to watch live events such as concerts, sports, and comedy shows with other VR users using their avatars or passthrough mode.
  • Meta Assistant: A voice-based AI assistant that can help users with various tasks and queries across Meta’s platforms and devices.
  • Meta AI Characters: A set of 28 AI characters that can interact with users on Facebook, Instagram, and WhatsApp using natural language and emotions. These characters can be used for entertainment, education, or companionship purposes.

The Meta Connect Universe: Meta’s Vision for a Connected and Immersive Virtual World

Meta also shared its vision for the Meta Universe, a connected and immersive virtual world that spans across VR, AR, and MR devices and platforms. The Meta Universe is envisioned as a place where people can express themselves, socialize, learn, work, play, and create in new and exciting ways. The Meta Universe is also envisioned as a place where users can have more agency, ownership, and interoperability over their digital assets and identities.

Meta stated that it is working with various partners and developers to build the Meta Universe, and that it is committed to making it open, accessible, and inclusive for everyone. Meta also stated that it is investing in research and innovation to overcome the technical and ethical challenges that come with building the Meta Universe, such as privacy, security, safety, diversity, and sustainability.

Meta Connect 2023 was a showcase of Meta’s ambition and innovation in the fields of VR, AR, and AI. The event demonstrated how Meta is leading the way in creating immersive and interactive experiences that could transform how people connect, work, play, and learn in the future.

Sources:

  1. Meta Connect 2023 Keynote and Highlights in Just 5 Minutes. YouTube video, 5:04. Posted by Faultyogi, October 5, 2023. https://www.youtube.com/watch?v=Mpa4HOOTO8I
  2. Meta Connect 2023: Keynote Speech Highlights – XR Today. XR Today, September 27, 2023. https://www.xrtoday.com/event-news/meta-connect-2023-keynote-speech-highlights/
  3. Meta Quest 3: List of Game Announcements from Meta Connect 2023. Gamer Noize, October 5, 2023. https://gamernoize.com/meta-quest-3-list-of-game-announcements-from-meta-connect-2023/
  4. Meta Quest 3 Revealed: Meta Connect 2023 Keynote Livestream. IGN, September 27, 2023. https://www.ign.com/videos/meta-quest-3-revealed-meta-connect-2023-keynote-livestream


Bing Chat introduces DALL-E 3

Bing Chat, the chat mode of Microsoft Bing, has announced the integration of DALL-E 3, a state-of-the-art AI image generator that can create images from text descriptions. DALL-E 3 is the latest version of OpenAI’s text-to-image model, succeeding DALL-E 2. It is developed by OpenAI, a research organization dedicated to creating and ensuring the safe use of artificial intelligence.

What can DALL-E 3 do?

DALL-E 3 can generate images for a wide range of concepts expressible in natural language, such as “an armchair in the shape of an avocado” or “a store front that has the word ‘openai’ written on it”. Compared with earlier versions, it follows long, nuanced prompts far more faithfully and renders details such as human hands, faces, and in-image text with much greater accuracy.

How does DALL-E 3 work with Bing Chat?

DALL-E 3 is built natively on ChatGPT, another deep learning model that can generate natural language texts. ChatGPT is also integrated with Bing Chat, allowing users to chat with an AI assistant that can help them with various tasks, such as searching the web, writing essays, or creating graphic art. By using ChatGPT as a brainstorming partner and refiner of prompts, users can easily translate their ideas into exceptionally accurate images with DALL-E 3.
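Outside of Bing Chat, the same refine-then-generate workflow can be sketched against OpenAI's public API. This is an illustrative assumption rather than Bing's internal integration: the model names, message wording, and helper functions below are hypothetical choices, not anything the article specifies.

```python
# Sketch of the "chat model refines the prompt, DALL-E 3 draws it" workflow,
# using OpenAI's public Python SDK. Model names and wording are illustrative.

def build_refinement_messages(idea: str) -> list:
    """Chat messages asking the model to expand a rough idea into a detailed image prompt."""
    return [
        {"role": "system",
         "content": "Rewrite the user's idea as a single, richly detailed DALL-E prompt."},
        {"role": "user", "content": idea},
    ]

def generate_image(idea: str) -> str:
    """Refine `idea` with a chat model, then render it with DALL-E 3; returns an image URL."""
    from openai import OpenAI  # requires `pip install openai` and OPENAI_API_KEY set

    client = OpenAI()
    chat = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any available chat model can act as the brainstorming partner
        messages=build_refinement_messages(idea),
    )
    refined_prompt = chat.choices[0].message.content
    image = client.images.generate(
        model="dall-e-3",
        prompt=refined_prompt,
        size="1024x1024",
        n=1,  # DALL-E 3 accepts one image per request
    )
    return image.data[0].url

# Example (needs a valid API key):
#   url = generate_image("an armchair in the shape of an avocado")
```

The two-step structure mirrors what the article describes: the chat model acts as the brainstorming partner that turns a vague idea into a precise prompt before the image model ever sees it.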

How can I access DALL-E 3?

DALL-E 3 is now generally available to everyone within Bing Chat and Bing.com/create, for free. The model delivers enhancements that improve the overall quality and detail of images, along with greater accuracy for human hands, faces, and text in images. Since the launch of Bing Image Creator, over 1 billion images have been generated; Microsoft reports that users have made illustrated stories, thumbnails for social media content, PC backgrounds, design inspirations, and much more.

How does OpenAI ensure the ethical use of DALL-E 3?

OpenAI has taken steps to limit DALL-E 3’s ability to generate violent, adult, or hateful content. It has also implemented mitigations to decline requests that ask for a public figure by name or an image in the style of a living artist. Creators can also opt their images out from training of future image generation models. OpenAI is also researching the best ways to help people identify when an image was created with AI, and experimenting with a provenance classifier—a new internal tool that can help them detect whether or not an image was generated by DALL-E 3.

Why should I use it?

Bing Chat is one of the first platforms to offer DALL-E 3 to its users, demonstrating its commitment to providing innovative and engaging services. Bing Chat users can access DALL-E 3 by typing “graphic art” followed by their text prompt in the chat window. They can also ask ChatGPT for suggestions or refinements of their prompts. Bing Chat hopes that DALL-E 3 will inspire its users to explore their creativity and imagination.

Sources:

  1. Parakhin M. (2023). DALL-E 3 now available in Bing Chat and Bing.com/create, for free! Retrieved from https://blogs.bing.com/search/october-2023/DALL-E-3-now-available-in-Bing-Chat-and-Bing-com-create-for-free
  2. Pierce D. (2023). You can now use the DALL-E 3 AI image generator inside Bing Chat. Retrieved from https://www.theverge.com/2023/10/3/23901963/bing-chat-dall-e-3-openai-image-generator
  3. Lee Hutchinson (2023). OpenAI’s new AI image generator pushes the limits in detail and prompt complexity. Ars Technica. Retrieved from https://arstechnica.com/information-technology/2023/09/openai-announces-dall-e-3-a-next-gen-ai-image-generator-based-on-chatgpt/



Epic MegaJam 2023 Kicks Off with Antiquated Future Theme

Epic MegaJam 2023: A Global Game Development Challenge

What is the Epic MegaJam?

The Epic MegaJam 2023 is an annual game development competition hosted by Unreal Engine, where participants have to create a game based on a given theme in a limited time. The event is open to anyone who wants to showcase their creativity and skills using Unreal Engine or Unreal Editor for Fortnite.

This year, the Epic MegaJam will kick off on Thursday, September 14, at 2 PM ET, during Inside Unreal on Twitch, YouTube and LinkedIn. The theme will be revealed at 3 PM ET, and participants will have until September 21, at 11:59 PM ET, to submit their games.

Participants can work alone or in teams of up to five members, and they can choose to use Unreal Engine or Unreal Editor for Fortnite to create their games. Unreal Engine is a powerful and versatile tool for creating games of any genre and platform, while Unreal Editor for Fortnite is a simplified version of the engine that allows users to create custom maps and modes for Fortnite.

cover poster of Epic MegaJam 2023

What are the free resources and support for the Epic MegaJam?

To help participants get ready and inspired for the Epic MegaJam, Unreal Engine has provided several free resources and support options. These include:

  • Free assets from the Unreal Engine Marketplace: Participants can use any assets that are available on the Unreal Engine Marketplace or the Fortnite Creative Hub, as well as any assets that they have created before or during the jam. However, they must list any content that was created before the jam in their submission form. Some of the free assets that are available on the Marketplace are:
    • Animation Packs from RamsterZ: RamsterZ is a studio that specializes in creating high-quality animations for games. They have generously offered over 50 animation packs for free to all Epic MegaJam participants. These packs cover various genres and scenarios, such as combat, stealth, horror, comedy, romance, and more. You can download them from their website.
    • Environment Packs from Quixel: Quixel is a company that creates photorealistic 3D assets and environments using real-world scans. They have provided several environment packs for free to all Epic MegaJam participants. These packs include landscapes, buildings, props, and vegetation from different regions and themes, such as medieval, sci-fi, desert, forest, urban, and more. You can access them with your Epic Games account.
    • Sound Packs from Soundly: Soundly is a platform that offers thousands of sound effects and music tracks for games and media. They have given access to several sound packs for free to all Epic MegaJam participants. These packs include sounds for various genres and situations, such as action, adventure, horror, fantasy, sci-fi, and more. You can download them from their website.
    • Sound and Music from WeLoveIndies: WeLoveIndies is a platform that provides royalty-free sound and music for indie game developers. They have given free use of all sound and music from their catalogue for your Epic MegaJam project. You can create a free account and download their assets from their website.
  • Free access to Assembla: Assembla is a platform that enables game development teams to collaborate and manage their projects using Perforce, SVN and/or Git repositories. Assembla will grant access to their platform to all Epic MegaJam development teams for free. Teams can also use Assembla’s built-in PM tools to coordinate their efforts during the jam. You can sign up for Assembla here. Note: Repositories will be deleted 28 days after the jam concludes. Save your files locally to ensure they won’t be lost!
  • Free online courses from Udemy: Udemy is an online learning platform that offers courses on various topics and skills. Udemy has partnered with Unreal Engine to offer several courses on game development using Unreal Engine for free to all Epic MegaJam participants. These courses cover topics such as C++, Blueprints, VR, multiplayer, AI, animation, UI, and more. You can access them with your Epic Games account.
  • Free motion capture tools from Rokoko: Rokoko is a company that provides motion capture solutions for game developers and animators. They have offered two free tools for all Epic MegaJam participants:
    • Rokoko Video: Rokoko Video is an app that allows you to animate characters using your smartphone camera. You can record your own movements or use pre-made animations from Rokoko’s library. You can download the app for iOS or Android here.
    • Rokoko Studio Live: Rokoko Studio Live is a plugin that allows you to stream motion capture data from Rokoko’s hardware devices or Rokoko Video app directly into Unreal Engine. You can download the plugin here.
  • Free 3D scanning tools from Capturing Reality: Capturing Reality is a company that develops software for creating 3D models from photos or laser scans. They have offered two free tools for all Epic MegaJam participants who want to participate in the Edge of Reality modifier:
    • RealityScan: RealityScan is an app that allows you to create 3D models from photos taken with your smartphone camera. You can download the app for iOS or Android here.
    • RealityCapture: RealityCapture is a desktop software that allows you to create 3D models from photos or laser scans with high accuracy and detail. You can sign up for PPI credit redemption here. Once you have received your redemption code, you must log in with your Epic account and redeem it here.
  • Free textures from GameTextures.com: GameTextures.com is a platform that provides high-quality textures for game developers. They have given free access to their samples library for all Epic MegaJam participants. You can sign up for a free account and download their textures from their website.
  • Discounted 3D navigation devices from 3Dconnexion: 3Dconnexion is a company that produces devices that enable intuitive and immersive 3D navigation in Unreal Engine and other applications. They have offered a 20% discount on their products for all Epic MegaJam participants. You can learn more about their products and how to use them here.
  • Free resources from The Unreal Directive: The Unreal Directive is a website that provides Unreal Engine resources that are well-researched, easy to understand, and adhere to best development practices. They have offered free access to their articles, tutorials, and templates for all Epic MegaJam participants. You can check out their resources here.
  • Free support from the Unreal Engine community: Unreal Engine has a vibrant and helpful community of developers and enthusiasts who are always ready to share their knowledge and experience.
  • Free sound and music from WeLoveIndies: WeLoveIndies gives free use of all sound and music from the BOOM Sound Effects Library Game Pack.
  • Animation packs from RamsterZ.
  • Environment packs from Quixel Megascans.

What are the prizes and categories?

The Epic MegaJam 2023 will feature 1st place finalists for both Unreal Engine and Unreal Editor for Fortnite submissions, as well as 1st place student finalists for both tools. Additionally, there will be several modifier categories that reward games meeting certain criteria, such as being funny, innovative, or accessible.

The prizes for the Epic MegaJam 2023 include cash awards, Unreal Engine swag, hardware devices, software licenses, online courses, and more. The total value of the prizes exceeds $100,000. Some of the prizes are:

  • Cash awards: Unreal Engine finalists will receive $5,000 (1st place), $2,500 (2nd), and $1,000 (3rd). Unreal Editor for Fortnite finalists will receive $2,500 (1st), $1,250 (2nd), and $500 (3rd). Student finalists (UE & UEFN) will receive $1,000 (1st), $500 (2nd), and $250 (3rd).
  • Unreal Engine swag: All finalists and modifier category winners will receive a package of Unreal Engine swag, such as t-shirts, hoodies, hats, stickers, pins, mugs, and more.
  • Hardware devices: All finalists and modifier category winners will receive a hardware device of their choice from a list of options provided by Unreal Engine. These options include laptops, tablets, smartphones, consoles, VR headsets, monitors, keyboards, mice, controllers, speakers, headphones, microphones, cameras, and more.
  • Software licenses: All finalists and modifier category winners will receive a software license of their choice from a list of options provided by Unreal Engine. These options include game engines, game engine plugins, game development tools, game design tools, game art tools, game audio tools, game testing tools, game publishing tools, game marketing tools, and more.
  • Online courses: All finalists and modifier category winners will receive an online course of their choice from a list of options provided by Unreal Engine. These options include courses on game development using Unreal Engine or Unreal Editor for Fortnite from Udemy or other platforms.

The winners will be announced on October 5th during a special livestream on Twitch, YouTube and LinkedIn.

How to join and submit?

To join the Epic MegaJam 2023, participants need to register on the official website and create an account on itch.io, where they will upload their games. Submissions must be packaged for Windows, macOS, Android, or iOS (development build), and they must include custom gameplay that exceeds the gameplay found in Epic Games’ starter templates. Submissions must also include a link to gameplay footage (30 to 60 seconds) demonstrating recorded gameplay.

Participants can use any assets that are available on the Unreal Engine Marketplace or the Fortnite Creative Hub, as well as any assets that they have created before or during the jam. However, they must list any content that was created before the jam in their submission form.

Why join the Epic MegaJam?

The Epic MegaJam is a great opportunity for game developers of all levels and backgrounds to challenge themselves, learn new skills, network with other developers, and have fun. The event also showcases the potential and versatility of Unreal Engine and Unreal Editor for Fortnite as game development tools.

By joining the Epic MegaJam, participants can get feedback from industry experts and judges, as well as exposure to a global audience of gamers and enthusiasts. Moreover, participants can win amazing prizes and recognition for their hard work and creativity.

So what are you waiting for? Join the Epic MegaJam today and unleash your imagination!

Nvidia NeMo SteerLM: AI personalities for in-game characters

NVIDIA ACE Enhanced with Dynamic Responses for Virtual Characters

The new technique allows developers to customize the behavior and emotion of NPCs using large language models

Nvidia, the leading company in graphics and artificial intelligence, has announced a new technology (Nvidia NeMo SteerLM) that enables game developers to create intelligent and realistic in-game characters powered by generative AI.

The technology, called Nvidia ACE (Avatar Cloud Engine), is a suite of tools and frameworks that leverages Nvidia’s expertise in computer vision, natural language processing, and deep learning to generate high-quality 3D models, animations, voices, and dialogues for virtual characters.

One of the key components of Nvidia ACE is NeMo SteerLM, a tool that allows developers to train large language models (LLMs) to provide responses aligned with particular attributes ranging from humorous to helpful. NeMo SteerLM is based on Nvidia’s NeMo framework, which simplifies the creation of conversational AI applications.

Nvidia NeMo SteerLM

What is NeMo SteerLM and how does it work?

NeMo SteerLM is a new technique that enables developers to customize the personality of NPCs for more emotive, realistic and memorable interactions.

Most LLMs are designed to provide only ideal responses, free of personality or emotion, as you can see by interacting with chatbots. With the NeMo SteerLM technique, however, LLMs are trained to provide responses aligned with particular attributes, ranging from humor and creativity to toxicity, all of which can be quickly configured through simple sliders.

For example, a character can respond differently depending on the player’s mood, personality, or actions. A friendly character can crack jokes or give compliments, while an enemy character can insult or threaten the player. A character can also adapt to the context and tone of the conversation, such as being sarcastic or serious.

The NeMo SteerLM technique helps turn polite chatbots into emotive characters, enabling developers to create more immersive and realistic games.
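Nvidia has not published the internal details here; purely as an illustration of the “attribute slider” idea, attribute-conditioned prompting can be sketched as follows. Every name, format, and range below is hypothetical, not the real NeMo API:

```python
# Hypothetical sketch of attribute-conditioned generation in the spirit of
# SteerLM: the desired attribute levels ("slider" values from 0 to 9) are
# encoded into the prompt of a model that was tuned to follow them.
def build_steered_prompt(user_message: str, attributes: dict[str, int]) -> str:
    # Clamp each slider to the 0-9 range the (hypothetical) model expects.
    clamped = {name: max(0, min(9, level)) for name, level in attributes.items()}
    attr_str = ",".join(f"{name}:{level}" for name, level in sorted(clamped.items()))
    return f"<attributes>{attr_str}</attributes>\nUser: {user_message}\nAssistant:"

# A guard NPC with high humor, low helpfulness, and zero toxicity:
prompt = build_steered_prompt(
    "Who goes there?",
    {"humor": 8, "helpfulness": 3, "toxicity": 0},
)
```

Changing a slider value re-steers the same underlying model, which is why one LLM can serve multiple character personalities.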

What are the benefits and challenges of using NeMo SteerLM?

The benefits of using NeMo SteerLM are:

  • It can create more engaging and immersive gaming experiences by enabling characters to have natural and diverse conversations with players.
  • It can reduce the development time and cost by automating the generation of dialogues and personalities for NPCs.
  • It can enable multiple characters with a single LLM by infusing personality attributes into the model.
  • It can create faction attributes to align responses with the in-game story, allowing the character to be dynamically influenced by a changing open world.

The challenges of using NeMo SteerLM are:

  • It requires a large amount of data and computational resources to train LLMs.
  • It may generate inappropriate or offensive responses that could harm the reputation or image of the game or developer.
  • It may encounter ethical or legal issues regarding the ownership and responsibility of the generated content.

How can developers access and use NeMo SteerLM?

NeMo SteerLM is part of the Nvidia ACE platform, Nvidia’s suite of tools and frameworks for AI-driven virtual characters. ACE is designed to work alongside Nvidia Omniverse, a collaboration and simulation platform for 3D content creation that connects different applications and tools, enabling interoperability and real-time collaboration.

The Omniverse ecosystem also includes tools such as:

  • Nvidia Omniverse Kaolin, a framework for 3D deep learning that enables fast and easy creation of 3D models from images, videos, or sketches.
  • Nvidia Omniverse Audio2Face, a tool that generates realistic facial animations from audio inputs using a neural network.
  • Nvidia Omniverse Machinima, a platform that allows users to create cinematic videos using assets from popular games.

Nvidia ACE is currently in early access and will be available to game developers later this year. For more information, visit the official website or watch the video.


Unreal Engine 5.3 Preview Released

What’s New in Unreal Engine 5.3

Epic Games has released a preview version of Unreal Engine 5.3, the latest update of its popular real-time 3D creation tool. The new version brings significant improvements and new features for developers and artists, such as enhanced lighting, geometry, and ray tracing systems, new tools for creating realistic hair and fur, and new frameworks for importing and exporting large landscapes and generating procedural content.

Lumen, Nanite, and Path Tracer

Unreal Engine 5.3 introduces major enhancements to the software’s Lumen, Nanite, and Path Tracer features, which were first introduced in Unreal Engine 5 Early Access.

Lumen is a fully dynamic global illumination solution that reacts to scene and light changes in real time, creating realistic and believable lighting effects. In Unreal Engine 5.3, Lumen supports multiple bounces of indirect lighting, volumetric fog, translucent materials, and sky light.

Nanite is a virtualized micropolygon geometry system that enables users to create and render massive amounts of geometric detail without compromising performance or quality. In Unreal Engine 5.3, Nanite supports skeletal meshes, animation, morph targets, levels of detail (LOD), and collision detection.

Path Tracer is a physically accurate ray tracing solution that simulates the behavior of light in complex scenes, producing photorealistic images. In Unreal Engine 5.3, Path Tracer supports Lumen, Nanite, translucency, subsurface scattering, clear coat, and anisotropy.

Hair and Fur Grooming

Unreal Engine 5.3 also introduces new tools for creating high-fidelity hair and fur for digital characters and creatures. Users can import hair grooms from external applications such as Maya or Blender, or create them from scratch using the new Hair Strands Editor. Users can also edit the hair properties such as color, thickness, clumping, waviness, and curliness using the new Hair Material Editor.

Unreal Engine 5.3 also supports rendering hair and fur using either rasterization or ray tracing methods. Users can choose the best option for their project depending on the desired quality and performance.

World Building Tools

Unreal Engine 5.3 offers new world building tools that enable users to work on large open worlds collaboratively and efficiently. One of the new features is World Partition, which automatically divides the world into a grid and streams only the necessary cells based on the camera position and visibility. Users can also import and export large landscapes using the new Landscape Heightfield IO framework.
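The grid-streaming idea behind World Partition can be illustrated independently of the engine: given a camera position and a loading radius, select the cells to keep resident. A minimal sketch follows; the cell size and radius are made-up values, not Unreal defaults:

```python
import math

def cells_to_load(camera_x: float, camera_y: float,
                  cell_size: float = 256.0,
                  load_radius: float = 512.0) -> set[tuple[int, int]]:
    """Return the grid cells whose centers lie within load_radius of the camera."""
    reach = math.ceil(load_radius / cell_size) + 1  # how many cells to scan outward
    cx, cy = int(camera_x // cell_size), int(camera_y // cell_size)
    loaded = set()
    for ix in range(cx - reach, cx + reach + 1):
        for iy in range(cy - reach, cy + reach + 1):
            # Distance from the camera to the cell's center decides residency.
            center = ((ix + 0.5) * cell_size, (iy + 0.5) * cell_size)
            if math.dist((camera_x, camera_y), center) <= load_radius:
                loaded.add((ix, iy))
    return loaded

# As the camera moves, newly selected cells stream in and dropped cells unload.
resident = cells_to_load(0.0, 0.0)
```

An engine would additionally consider visibility and stream asynchronously, but the core decision per frame reduces to a set difference between the previous and current cell sets.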

Another new feature is the Procedural Content Generation (PCG) framework, which enables users to define rules and parameters to populate scenes with Unreal Engine assets of their choice. Users can also control the placement, rotation, scale, and variation of the assets using the new Procedural Placement Tool.
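To make the rules-and-parameters idea concrete outside the engine, a toy scatter rule might look like the following. The asset names and numbers are invented for illustration; this is not the PCG API:

```python
import random

def scatter_assets(asset_names, count, area=100.0,
                   scale_range=(0.8, 1.2), seed=42):
    """Toy procedural placement: scatter assets with random position,
    yaw rotation, and scale variation inside a square area."""
    rng = random.Random(seed)  # seeded so the same rules yield the same layout
    placements = []
    for _ in range(count):
        placements.append({
            "asset": rng.choice(asset_names),
            "position": (rng.uniform(0.0, area), rng.uniform(0.0, area)),
            "yaw_degrees": rng.uniform(0.0, 360.0),
            "scale": rng.uniform(*scale_range),
        })
    return placements

rocks = scatter_assets(["rock_small", "rock_large"], count=5)
```

Determinism from the seed is the important property: the same rules regenerate the same scene, so only the rules need to be stored, not the placed instances.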

Experimental Features in Unreal Engine 5.3 Preview

Unreal Engine 5.3 also includes a number of new experimental features that introduce new capabilities for rendering, animation, visualization, and simulation.

  • Sparse Volume Textures and Volumetric Path Tracing are new features that enable users to create realistic volumetric effects such as smoke and fire using ray tracing. Sparse Volume Textures allow users to store and sample large volumes of data efficiently, while Volumetric Path Tracing simulates the interaction of light with the volume data.
  • Skeletal Editor is a new feature that allows users to do weight and skinning work for skeletal meshes in-engine. Users can edit the bone weights, vertex influences, and skinning methods using a visual interface.
  • Orthographic Rendering is a new feature that enables users to create parallel projection views of their scenes. This is particularly useful for architecture and manufacturing visualizations, as well as stylized game projects that require orthographic views.
  • Panel Cloth Editor and ML Cloth are new features that improve cloth tooling in Unreal Engine. Panel Cloth Editor allows users to create cloth simulations based on panels, which are flat pieces of cloth that can be stitched together. ML Cloth is a machine learning-based solver that can handle complex cloth behaviors such as stretching, bending, and tearing.
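The parallel projection behind orthographic views is standard graphics math rather than anything Unreal-specific. As a sketch, a 4x4 orthographic projection matrix (OpenGL convention, mapping an axis-aligned view volume to [-1, 1] clip space) can be built like this:

```python
def ortho_matrix(left, right, bottom, top, near, far):
    """Row-major 4x4 orthographic projection matrix (OpenGL convention)."""
    return [
        [2.0 / (right - left), 0.0, 0.0, -(right + left) / (right - left)],
        [0.0, 2.0 / (top - bottom), 0.0, -(top + bottom) / (top - bottom)],
        [0.0, 0.0, -2.0 / (far - near), -(far + near) / (far - near)],
        [0.0, 0.0, 0.0, 1.0],
    ]

# Unlike a perspective matrix, no entry depends on the vertex's depth,
# so objects keep the same on-screen size at any distance from the camera.
m = ortho_matrix(-10.0, 10.0, -10.0, 10.0, 0.1, 100.0)
```

That depth-independence is exactly what makes the projection useful for architectural elevations and measured manufacturing views.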

How to Download

Unreal Engine 5.3 Preview is available for download now from the official website. The software is free for personal use, education, and non-commercial projects. For commercial projects, users are required to pay a 5% royalty on gross revenue once a product earns more than $1 million in lifetime gross revenue, with royalties reported per calendar quarter.
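To illustrate how the royalty threshold works in practice, here is a rough sketch assuming the $1 million threshold applies to lifetime gross revenue per product (an illustration, not licensing guidance):

```python
def unreal_royalty(lifetime_gross_revenue: float, rate: float = 0.05,
                   threshold: float = 1_000_000.0) -> float:
    """Royalty owed on gross revenue above the first $1M
    (assumed lifetime, per product)."""
    return max(0.0, lifetime_gross_revenue - threshold) * rate

print(unreal_royalty(800_000))    # below the threshold: nothing owed -> 0.0
print(unreal_royalty(3_000_000))  # 5% of the $2M above the threshold -> 100000.0
```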

Users who want to try out the new features of Unreal Engine 5.3 Preview should be aware that the software is still in development and may contain bugs or issues. Users are encouraged to report any feedback or problems to the Unreal Engine forums or the Unreal Engine GitHub page.


Resources:

https://www.unrealengine.com/en-US/download

https://forums.unrealengine.com/t/unreal-engine-5-3-preview/1240016

https://github.com/EpicGames/UnrealEngine

https://80.lv/articles/unreal-engine-5-3-preview-has-been-launched/

https://www.techpowerup.com/311795/unreal-engine-5-3-preview-out-now


The Amazing Story of Game Zanga 12: How Arab Game Developers Created Games That Changed The Rules in Only 3 Days

A game development event that challenges the participants to create games that break the rules

Salama Productions, an independent video game developer based in Cairo, Egypt, has announced its participation as one of the supporters of Game Zanga 12 (زنقة الالعاب), a game development event that gathers Arab game enthusiasts and developers in a workshop to create games based on a specific theme in only three days.

The event is organized by Danar Kayfi, a passionate game developer who was selected as one of the GI 100 Game Changers of 2020, and hosted by itch.io. The theme of the event is “Change The Rules”, which challenges the participants to create games that break the conventions or expectations of a genre, mechanic, or narrative.

Salama Productions Supports Game Zanga 12 Participants on SaudiGN Discord Server

Salama Productions was part of a consulting team that provided exclusive consultation and feedback to the participants on the SaudiGN Discord server, a platform that supports game development in the Arab region, especially Saudi Arabia, by providing a cooperative environment, services, events, and opportunities for game developers at all levels.

Salama Productions is also cooperating with SaudiGN to promote its indie games, such as its upcoming title Off The Grid: Bad Dream, a cinematic, story-driven mystery thriller game that follows a survivor of a car accident whose life becomes a strange dream. The game explores the blurred line between fiction and reality in an immersive and captivating way.

A great opportunity for learning, experimenting, and collaborating in game development

Game Zanga 12 started on Thursday, July 13, 2023 at 7 pm Saudi time, and will end on Sunday, July 16, 2023 at 7 pm Saudi time. The participants will be able to submit their games on itch.io and rate other games based on their use of the theme, fun factor, creativity, and aesthetics.

The “Change The Rules” theme encourages the participants to explore new possibilities and solutions in game development. It also reflects the growing demand and competition in the game market, as many players are looking for games that offer unique and diverse experiences.

Some of the participants shared their feedback and experience with Game Zanga 12. One participant said: “Game Zanga 12 was a great opportunity for me to learn new skills and collaborate with other game developers. I enjoyed working on the theme of Change The Rules and creating a game that challenges the players’ expectations.” Another said: “I loved participating in Game Zanga 12. It was fun and rewarding to create a game in only three days with the help of SaudiGN. I learned a lot from their consultation and feedback.”

Now is the time to play and rate the games. Rating ends in a week!


Resources:

For more information about Game Zanga 12 (زنقة الالعاب), visit the official website (https://www.gamezanga.net) or follow the official Twitter account (@GameZanga).

For more information about SaudiGN and its services, visit the official website (Saudi game news) or follow the official Twitter account (@SaudiGN_sa).

More articles: https://www.salamaproductions.com/news/


How We Are Creating a Game That Reverses the Horror Side Effects of Horror Games: Our Interview on Arcade Podcast

Meet Doa, the Host of Arcade Podcast, and Her Guests: Marwan Imam and Salama Productions

Doa is a passionate video game enthusiast, content creator, and show producer who runs a YouTube channel called Arcade – أركيد, where she produces and hosts a show of the same name. With her extraordinary sense of humor and charisma, she can easily put a smile on viewers’ faces as she talks about various topics in the video game industry, such as reverse horror games. Her talent has attracted thousands of people who are interested in content that discusses video games from different perspectives. Some of her episodes are podcasts where she interviews influential people in the video game industry in Egypt.

We were lucky enough to be invited as guests on her podcast show to talk about our indie game project, Off The Grid: Bad Dream, which is currently in development. We are Salama Productions, an indie game developer that aims to reverse the horror side effects of horror games by creating games that promote positivity, harmony, and healing for players. Off The Grid: Bad Dream is a mystery thriller game that tells the story of a car accident survivor whose life becomes a strange dream that blurs the line between fiction and reality.

Doa also invited Marwan Imam to co-host the episode with her, so he could ask us questions and share his insights. Marwan Imam is an influential and talented content creator known for Peace Cake, a YouTube channel that makes various shows, programs, episodes, music, sketches, and more, with the goal of making something delicious and light on the heart. We were delighted to meet Marwan, who is a very humble and knowledgeable personality. He knows a lot about the video game industry, its genres, and its history, and he asked us very interesting questions that helped us showcase our game to the viewers.

What We Talked About in the Episode

Here are some of the points we talked about in the episode:

  • After introducing ourselves to the audience, Doa gave a brief explanation of what an indie game is and referred to her previous episode on indie games for more information.
  • We shared how we found our love for games and what games or indie studios inspired us the most with our work.
  • We discussed the crunch policy as a harmful business practice adopted by many big publishers and how it affects the industry negatively. We also touched on the issue of microtransactions and how they can ruin the gaming experience for some players.
  • We expressed our admiration for studios like Remedy, which made Control and Alan Wake, as well as Campo Santo (Firewatch) and Quantic Dream (Beyond: Two Souls), all of which have created amazing games with unique art styles, stories, and gameplay mechanics.
  • We discussed how our lives became almost dystopian when the modern way of living forced us, as individuals, to abandon our relationship with nature. We explained how Off The Grid: Bad Dream explores these themes in its story and setting, and how it encourages players to reconnect with nature in their own lives.
  • Mohamed talked about how he came up with the idea of being an indie developer here in Egypt and what challenges he faced along the way.
  • We explored the psychological aspects of horror video games versus games that promote harmony and positivity, and how they relate to anxiety disorders and stress. We also shared our personal preferences and experiences with different genres of games.
  • We discussed how we use the concept of episodic games, which consist of episodes like a TV series, to be efficient as an indie developer. We explained some of the benefits and challenges of creating episodic games as an indie developer, how we use small-scale environments like P.T., the Silent Hill playable teaser, to create immersive and engaging experiences for players, and how we focus on doing only what each episode needs to cut down on production time and cost.

Conclusion

We concluded by emphasizing the importance of video games as a form of art and entertainment that can enrich our lives and challenge our minds. We also encouraged people who think otherwise to give video games a try and see for themselves how fun and rewarding they can be.

We had a great time talking to Doa and Marwan on Arcade – أركيد, and we hope you enjoyed watching or listening to the episode. If you want to know more about Doa’s channel or support her amazing work, you can follow her on these links:

YouTube: https://www.youtube.com/@arcade-5954

Twitter: https://twitter.com/Doulicious

Instagram: https://www.instagram.com/douliciouse/

The Podcast Episode: https://youtu.be/xAZSh3jYCZc


Thank you for your attention and interest!

Image gallery: P.T. (demo cover and gameplay screenshot), Control (gameplay and cinematic screenshots), Alan Wake (cinematic and gameplay screenshots), Firewatch (game cover and gameplay screenshot), Beyond: Two Souls.