Theory Flashcards
What are the basic principles of Unity architecture and how do they affect game development?
The basic principles of Unity architecture revolve around GameObjects, Components, Scenes, and Scripts:
GameObjects: These are the fundamental entities in Unity, representing characters, props, cameras, lights, etc.
Components: Components are attached to GameObjects to add functionality or behavior. Examples include Rigidbody for physics, Renderer for visuals, and Scripts for custom logic.
Scenes: Scenes are collections of GameObjects that make up a level or a portion of a game world. They help organize the game’s structure and content.
Scripts: Scripts are written in languages like C# and are attached to GameObjects as Components. They define the behavior and interactions of GameObjects in the game.
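A minimal sketch shows how these pieces connect: a script becomes a Component the moment it is attached to a GameObject. (The `Spinner` class and its speed field are illustrative, not part of any official API.)

```csharp
using UnityEngine;

// A minimal custom Component: attach this script to a GameObject
// in a Scene and it will rotate that object every frame.
public class Spinner : MonoBehaviour
{
    [SerializeField] private float degreesPerSecond = 90f; // shown in the Inspector

    private void Update()
    {
        // Rotate around the Y axis; Time.deltaTime keeps the speed
        // frame-rate independent.
        transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f);
    }
}
```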
What physics mechanics have you used to create realistic object movement?
I’ve utilized rigid body dynamics, which simulate the motion and interactions of objects based on their mass, forces, and collisions. Additionally, I’ve employed techniques like raycasting for precise object detection and physics-based animation blending for smoother movement transitions.
What is the Rigidbody component?
The Rigidbody component in Unity is essential for simulating realistic physics in games. It’s attached to GameObjects to give them physical properties like mass, gravity, and forces. This allows objects to respond naturally to collisions, forces, and movement commands within the game world. Rigidbody is crucial for creating realistic interactions between game elements, enhancing gameplay dynamics, and ensuring objects behave according to the laws of physics, making games more immersive and engaging for players.
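A sketch of Rigidbody-driven movement: forces are applied through the Rigidbody rather than by moving the Transform directly, so the physics engine stays in charge. (The class name and the force value are illustrative.)

```csharp
using UnityEngine;

// Applies an upward impulse when Space is pressed.
[RequireComponent(typeof(Rigidbody))]
public class JumpOnSpace : MonoBehaviour
{
    [SerializeField] private float jumpForce = 5f; // illustrative value

    private Rigidbody rb;

    private void Awake()
    {
        rb = GetComponent<Rigidbody>();
    }

    private void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            // ForceMode.Impulse applies the force instantaneously,
            // scaled by the Rigidbody's mass.
            rb.AddForce(Vector3.up * jumpForce, ForceMode.Impulse);
        }
    }
}
```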
What is the Collider component, and what collider types exist? What is the difference between isTrigger mode and normal mode?
The Collider component in Unity defines the shape of objects for collision detection and physics. There are types like Box, Sphere, Capsule, and Mesh Colliders.
In normal mode, colliders physically interact with other objects, affecting their movement. In isTrigger mode, colliders don’t cause physical reactions but trigger events when other objects enter their area, useful for creating game mechanics like checkpoints or collectibles.
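The two modes surface in code through different callbacks, which a small sketch makes clear. (The class name and the "Player" tag are assumptions for illustration.)

```csharp
using UnityEngine;

public class PickupZone : MonoBehaviour
{
    // Called for a normal (non-trigger) collider when something hits it;
    // at least one of the two objects needs a Rigidbody.
    private void OnCollisionEnter(Collision collision)
    {
        Debug.Log($"Solid hit from {collision.gameObject.name}");
    }

    // Called when another collider enters this collider while
    // "Is Trigger" is checked -- no physical response occurs.
    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
        {
            Debug.Log("Checkpoint reached");
        }
    }
}
```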
What is the Sprite Renderer component? What is the difference between Order in Layer and Sorting Layer?
The Sprite Renderer component in Unity displays 2D graphics or sprites on GameObjects in the game world. It allows developers to easily add visual elements like characters, backgrounds, or objects to their games.
Order in Layer: Determines the rendering order of sprites on the same sorting layer. Higher values appear in front of lower values.
Sorting Layer: Defines groups of sprites that can be controlled independently for rendering order. Each layer can have its own order of rendering relative to other layers.
Name the main lifecycle methods of MonoBehaviour
The main lifecycle of MonoBehaviour in Unity consists of several key methods:
Awake(): Called when the script instance is being loaded. This is used for initializing variables or setting up references.
Start(): Called before the first frame update, after Awake(). This is often used for initialization that requires all GameObjects to be initialized.
Update(): Called once per frame. This is where most of the game’s code and logic for updating GameObjects typically goes.
FixedUpdate(): Called at fixed intervals, typically used for physics-related calculations. This ensures that physics calculations are consistent across different devices.
LateUpdate(): Called after all Update() functions have been called. Useful for actions that need to be performed after all Update() calls are completed.
OnEnable(): Called when the object becomes enabled and active. This is often used for re-initialization or setting up event listeners.
OnDisable(): Called when the object becomes disabled or inactive. This is used for cleaning up resources or removing event listeners.
OnDestroy(): Called when the GameObject is being destroyed. This is where you release any resources or perform any final cleanup.
Understanding and utilizing these methods correctly is crucial for managing the behavior and lifecycle of GameObjects in Unity.
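The call order can be made visible with a simple logger component: attach it to a GameObject, enter Play mode, and watch the Console print Awake, OnEnable, Start, then per-frame FixedUpdate/Update/LateUpdate, and finally OnDisable and OnDestroy.

```csharp
using UnityEngine;

// Logs every major MonoBehaviour callback to the Console so the
// lifecycle order becomes visible at runtime.
public class LifecycleLogger : MonoBehaviour
{
    private void Awake()       => Debug.Log("Awake");
    private void OnEnable()    => Debug.Log("OnEnable");
    private void Start()       => Debug.Log("Start");
    private void FixedUpdate() => Debug.Log("FixedUpdate");
    private void Update()      => Debug.Log("Update");
    private void LateUpdate()  => Debug.Log("LateUpdate");
    private void OnDisable()   => Debug.Log("OnDisable");
    private void OnDestroy()   => Debug.Log("OnDestroy");
}
```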
How do the Awake, Start, Update, and FixedUpdate methods work?
Awake(): This method is called when the script instance is being loaded. It’s typically used for initialization tasks that need to be performed before the game starts. Awake() is called even if the script component itself is disabled, but it does not run until the GameObject is active in the scene hierarchy.
Start(): Start() is called before the first frame update, after Awake(). It’s often used for initialization that requires all GameObjects to be initialized. Start() is not called if the GameObject is inactive.
Update(): Update() is called once per frame and is where most of the game’s code and logic for updating GameObjects typically goes. It’s used for tasks that need to be performed continuously, such as player input processing, animation updates, or AI behavior.
FixedUpdate(): FixedUpdate() is called at fixed intervals, typically used for physics-related calculations. This ensures that physics calculations are consistent across different devices, regardless of the frame rate. FixedUpdate() is commonly used for rigidbody physics interactions, such as applying forces or performing movement calculations.
When is it better to use Update, FixedUpdate, or LateUpdate?
The choice between Update, FixedUpdate, and LateUpdate depends on the specific needs of your game and the behavior you’re implementing:
Update(): This method is called once per frame and is suitable for general-purpose code that doesn’t require strict timing, such as player input processing, animation updates, or non-physics related movement. However, Update() is not frame-rate independent, so if the frame rate drops, the game might appear less smooth.
FixedUpdate(): FixedUpdate() is called at fixed intervals, typically synchronized with the physics engine’s fixed time step. It’s ideal for physics-related calculations, such as applying forces, performing movement calculations for Rigidbody objects, or handling physics-based interactions. FixedUpdate() ensures consistent physics behavior across different devices, regardless of the frame rate.
LateUpdate(): LateUpdate() is called after all Update() functions have been called. It’s useful for actions that need to be performed after other updates have occurred, such as camera follow behavior, where you want the camera to update after the player has moved. LateUpdate() can also be used for procedural animation updates or other actions that depend on the state of GameObjects after all other updates have been processed.
Choosing the appropriate update method ensures that your game behaves correctly and efficiently. By using FixedUpdate() for physics-related calculations, you ensure consistent physics behavior. Update() is suitable for general-purpose code, while LateUpdate() is useful for actions that need to be synchronized with the state of GameObjects after other updates have occurred.
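The camera-follow case mentioned above is the classic LateUpdate example: the camera samples the target's position only after all Update() movement has happened, avoiding one-frame-behind jitter. (The class name and offset value are illustrative.)

```csharp
using UnityEngine;

// Follows a target from a fixed offset, updating after all movement.
public class FollowCamera : MonoBehaviour
{
    [SerializeField] private Transform target;
    [SerializeField] private Vector3 offset = new Vector3(0f, 5f, -10f);

    private void LateUpdate()
    {
        if (target != null)
        {
            // Runs after every Update(), so the target has already moved.
            transform.position = target.position + offset;
            transform.LookAt(target);
        }
    }
}
```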
Is it possible to change the frequency of FixedUpdate, and why would you do it?
Yes, you can change the frequency of FixedUpdate by adjusting the Fixed Timestep in the Time settings. This is done to balance performance and accuracy:
Lower Frequency (increase timestep): Improves performance but reduces physics accuracy.
Higher Frequency (decrease timestep): Enhances physics accuracy but can impact performance.
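The same setting can be changed from code via Time.fixedDeltaTime (equivalent to Edit > Project Settings > Time > Fixed Timestep). Unity's default is 0.02 s, i.e. 50 physics steps per second; the 100 Hz value below is an illustrative choice.

```csharp
using UnityEngine;

// Raises the physics step rate at startup: more accurate simulation,
// at the cost of more CPU time spent in FixedUpdate.
public class PhysicsRateConfig : MonoBehaviour
{
    private void Awake()
    {
        Time.fixedDeltaTime = 1f / 100f; // 100 physics steps per second
    }
}
```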
What is the difference between the Awake and Start methods, and what are they used for?
Awake() is called when the script instance is being loaded, used for initialization tasks like setting up references.
Start() is called before the first frame update, often used for initialization requiring all GameObjects to be initialized.
What is scriptable object, what pattern programming does it implement?
A ScriptableObject is a data container in Unity used to store data independently of scene instances. It is usually described as an implementation of the Flyweight pattern: many objects reference one shared data asset instead of each holding its own copy. (It is also commonly used to build singleton-like configuration assets.)
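A minimal ScriptableObject sketch: instances are created as assets via the Assets > Create menu, and many scene objects can then reference the same asset without duplicating the data. (The class and field names are illustrative.)

```csharp
using UnityEngine;

// A data-container asset describing an enemy archetype.
[CreateAssetMenu(fileName = "EnemyStats", menuName = "Game/Enemy Stats")]
public class EnemyStats : ScriptableObject
{
    public float maxHealth = 100f;
    public float moveSpeed = 3.5f;
    public Sprite icon;
}
```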
What is PlayerPrefs, and what types of data can be stored in it? What other storage methods do you use in Unity?
PlayerPrefs is a class in Unity used to store and retrieve player preferences between game sessions. It can store integers, floats, and strings. In Unity, other storage methods include serialization, databases, and cloud services for more complex data management.
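A sketch of the three supported PlayerPrefs types in use. (The key names and values are illustrative.)

```csharp
using UnityEngine;

public static class SaveHelper
{
    public static void SaveProgress(int score)
    {
        PlayerPrefs.SetInt("HighScore", score);       // int
        PlayerPrefs.SetFloat("MusicVolume", 0.8f);    // float
        PlayerPrefs.SetString("LastLevel", "Forest"); // string
        PlayerPrefs.Save(); // flush to disk explicitly
    }

    public static int LoadScore()
    {
        // The second argument is the default returned when the key is missing.
        return PlayerPrefs.GetInt("HighScore", 0);
    }
}
```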
What animation did you work with?
I’ve primarily worked with built-in Unity animations, utilizing features like Animator and Animation components to create movement and interactions within the game world.
What are draw calls, and how do you reduce their number? How does batching work?
Draw Calls are requests made by the CPU to the GPU to render objects. Reducing their numbers improves performance. This can be done by optimizing materials, using texture atlases, and combining meshes. Batching combines multiple objects into a single draw call, reducing CPU-GPU communication overhead.
What is a Sprite Atlas? When should you ask an artist to pack sprites into a sprite atlas rather than doing it in the editor?
A Sprite Atlas is a collection of multiple sprites packed into a single texture. You should ask an artist to pack sprites in a sprite atlas when:
There are many sprites in the scene.
Sprites share similar textures or materials.
You need to reduce draw calls and improve performance.
It’s beneficial even when it’s easier to do in the editor because it optimizes rendering performance and memory usage.
How can I optimize the size of an application?
You can optimize the size of an application in Unity by:
Texture Compression: Use appropriate texture compression settings to reduce the size of textures.
Asset Bundles: Load assets dynamically through asset bundles to reduce initial build size.
Code Stripping: Use code stripping options to remove unused code and reduce build size.
Audio Compression: Compress audio files to reduce their size without significant loss in quality.
Asset Optimization: Remove unnecessary assets, scripts, and components to reduce build size.
Platform-specific Optimization: Optimize assets and settings for specific target platforms to reduce build size.
Build Settings: Adjust build settings to exclude unnecessary files and reduce build size.
What ways do you know to optimize the use of RAM?
To optimize the use of RAM in Unity:
Asset Compression: Compress textures, audio, and other assets to reduce memory usage.
Asset Streaming: Load assets dynamically as needed instead of loading everything upfront.
Texture Size Reduction: Use smaller texture sizes and mipmaps to reduce memory consumption.
Object Pooling: Reuse GameObjects instead of instantiating and destroying them to minimize memory allocations.
Memory Profiling: Use Unity’s profiler to identify memory-intensive areas and optimize accordingly.
Texture Atlases: Combine multiple textures into a single atlas to reduce memory overhead.
Script Optimization: Optimize scripts to minimize memory allocations and deallocations.
Platform-specific Optimization: Optimize settings and assets for target platforms to reduce memory usage.
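The object-pooling point above can be sketched as follows: instead of Instantiate/Destroy per bullet, inactive instances are parked in a stack and reused, avoiding per-shot allocations and GC pressure. (The class name and prefab field are illustrative.)

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal object pool for a bullet prefab.
public class BulletPool : MonoBehaviour
{
    [SerializeField] private GameObject bulletPrefab; // hypothetical prefab
    private readonly Stack<GameObject> pool = new Stack<GameObject>();

    public GameObject Get(Vector3 position)
    {
        // Reuse a parked instance if one exists; otherwise create one.
        GameObject bullet = pool.Count > 0
            ? pool.Pop()
            : Instantiate(bulletPrefab);
        bullet.transform.position = position;
        bullet.SetActive(true);
        return bullet;
    }

    public void Release(GameObject bullet)
    {
        bullet.SetActive(false); // no Destroy call, no garbage generated
        pool.Push(bullet);
    }
}
```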
What is Unity Events?
Unity Events are a messaging system used to trigger functions or methods in response to specific actions or conditions in Unity. They allow for decoupling of code and can be used to create modular and flexible systems.
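A UnityEvent sketch: listeners can be wired in the Inspector or added in code, so the broadcasting object never needs a direct reference to whatever reacts to it. (The Door class is illustrative.)

```csharp
using UnityEngine;
using UnityEngine.Events;

public class Door : MonoBehaviour
{
    // Serialized, so designers can hook up listeners in the Inspector.
    public UnityEvent onOpened = new UnityEvent();

    private void Start()
    {
        // Code-side subscription example.
        onOpened.AddListener(() => Debug.Log("Door opened"));
    }

    public void Open()
    {
        // Fires every registered listener, Inspector-wired or code-added.
        onOpened.Invoke();
    }
}
```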
What is Raycast?
A Raycast is a physics-based method in Unity used to detect objects along a line or ray in a scene. It’s commonly used for collision detection, object interaction, and determining line of sight.
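A typical raycast sketch: fire a ray from the camera through the mouse cursor and report the first collider it hits. (The 100f range and the class name are arbitrary choices for illustration.)

```csharp
using UnityEngine;

public class ClickSelector : MonoBehaviour
{
    private void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // Build a ray from the screen point under the cursor.
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);

            // Returns true and fills 'hit' if a collider lies within 100 units.
            if (Physics.Raycast(ray, out RaycastHit hit, 100f))
            {
                Debug.Log($"Hit {hit.collider.name} at {hit.point}");
            }
        }
    }
}
```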
Did you work with ParticleSystem? what problems can be solved by using Particle System?
Yes, I’ve worked with Particle Systems in Unity. They’re used to create various visual effects like fire, smoke, explosions, and magical spells. Particle Systems can solve problems such as creating realistic environmental effects, enhancing gameplay feedback, and adding visual polish to games.
What particle systems do you use to create the effects of fire, smoke, or explosion in games?
To create effects like fire, smoke, or explosions in Unity games, I utilize the Particle System component. This component allows me to generate and control thousands of particles, defining their behavior, appearance, and interaction with the environment.
What is a Canvas? How many Canvases can exist in a scene at the same time? When is there a need for more than one Canvas?
In Unity, a Canvas is a UI (User Interface) component used to render UI elements like buttons, text, and images in a game. There is no hard limit on the number of Canvases that can exist in a scene simultaneously. Multiple Canvases are often used when different UI elements require separate rendering settings, or when organizing UI elements into layers for better management and performance (for example, isolating frequently changing elements on their own Canvas so they don’t force the entire UI to rebuild).
What Render Modes do Canvases have, and what is each of them used for?
In Unity’s Canvas component, there are three Render Modes:
Screen Space - Overlay: Renders UI elements on top of the scene, ignoring depth and perspective. It’s commonly used for HUDs, menus, and other elements that should always be visible.
Screen Space - Camera: Renders UI elements within a specified camera’s view, allowing for depth and perspective effects. It’s useful for UI that interacts with the 3D environment or for in-world UI elements.
World Space: Renders UI elements as 3D objects within the scene, allowing for interaction with other GameObjects. It’s used for complex UI elements that need to move, rotate, or scale in 3D space.
How is the drawing order of UI elements determined? What can you control?
The procedure for drawing UI elements in Unity is determined by the Canvas component’s properties and hierarchy. You can control:
Canvas Render Mode: Choose between Overlay, Camera, or World Space to define how the UI is rendered.
UI Element Hierarchy: The order of UI elements in the hierarchy determines their drawing order.
Sorting Layer and Order in Layer: Control the rendering order of canvases and UI elements within the same layer.
Anchors and Pivot: Define the position and scaling behavior of UI elements relative to their parent.
RectTransform: Adjust size, position, and rotation of UI elements.
These settings allow you to manage the layout, rendering, and interaction of UI elements effectively.
If you were given a choice of two file formats in which a UI designer should deliver a screen reference, which would they be?
The two formats I would recommend for a UI designer’s screen references in Unity are:
PSD (Photoshop Document): For detailed and layered design files.
SVG (Scalable Vector Graphics): For resolution-independent vector graphics.
These formats ensure high-quality and scalable UI design assets.
Did you work with TextMeshPro, what is its disadvantage?
Yes, I’ve worked with TextMeshPro in Unity. Its main disadvantage is that it can be more complex and resource-intensive to set up and manage compared to the default UI Text component. However, it offers superior text rendering quality and features.
What is DOTween?
DOTween is a popular tweening library for Unity, used to create smooth and fluid animations. It simplifies the process of animating properties like position, scale, rotation, and color, making it easier to create dynamic and polished effects in games.
What other third party extensions have you used for Unity?
TextMesh Pro: For high-quality text rendering and typography.
What are prefabs? What is the difference between Prefabs and a scene?
Prefabs in Unity are preconfigured GameObjects saved as reusable assets. They allow for easy duplication and reuse of GameObjects across scenes.
The main difference between Prefabs and a scene is that Prefabs are reusable templates of GameObjects that can be instantiated multiple times across different scenes, while a scene is a specific environment containing GameObjects, lighting, and other elements that make up a level or portion of a game world.
What are the main components of Unity?
Unity Editor: This is the main interface where developers create and modify their projects. It includes tools for scene editing, asset management, scripting, debugging, and more.
Game Engine: At the core of Unity is its game engine, which handles rendering, physics, audio, scripting, networking, and other low-level functionalities needed to run a game.
Scripting: Unity supports scripting in C# (and previously UnityScript, which is now deprecated). Developers use scripts to define game behavior, interactions, and logic.
Asset Store: A marketplace where developers can buy and sell assets such as 3D models, textures, scripts, and tools. It’s a valuable resource for accelerating development.
Physics Engine: Unity includes a built-in physics engine (PhysX) for simulating realistic physical interactions between objects in the game world.
Graphics: Unity supports high-quality 2D and 3D graphics through its rendering pipeline, shaders, lighting system, and post-processing effects.
Cross-Platform Support: Unity allows developers to build games for multiple platforms including PC, consoles, mobile devices, VR/AR devices, and the web, using a single codebase.
IDE Integration: Unity integrates with popular IDEs (Integrated Development Environments) such as Visual Studio, Visual Studio Code, and JetBrains Rider for coding and debugging (MonoDevelop was bundled with older Unity versions but has since been discontinued).
Multiplatform Deployment: Unity enables developers to deploy games to a variety of platforms including iOS, Android, Windows, macOS, Linux, PlayStation, Xbox, and more.
Analytics and Monetization: Unity provides tools for analytics to understand player behavior and optimize games, as well as features for in-app purchases and ads to monetize games.
What file types can be used in or exported from Unity?
Unity Package (.unitypackage): Contains assets, scripts, and scenes for sharing between projects.
Asset Bundle: A collection of assets packaged for dynamic loading during runtime.
Build Executables: Compiled applications for various platforms, such as Windows, macOS, iOS, Android, etc.
Scene File (.unity): Contains the layout and content of a Unity scene.
Prefab File (.prefab): Contains a template of a GameObject with its components and properties.
Script File (.cs, .shader): Code files written in C# or shader languages (UnityScript .js files are deprecated).
Model File (.fbx, .obj): 3D model files used for importing meshes, materials, and animations into Unity.
Texture File (.png, .jpg, .tga): Image files used for texturing 3D models and UI elements.
Audio File (.wav, .mp3): Sound files used for audio effects and music in the game.
Did you work with UI? What are the common problems with it?
Yes, I’ve worked with UI in Unity. Some common problems with UI development include:
Scaling: Ensuring UI elements look and function correctly across different screen sizes and resolutions.
Performance: Optimizing UI rendering and interaction for smooth performance, especially in complex UI layouts.
Localization: Supporting multiple languages and text layouts within the UI.
Interactivity: Implementing user input handling and responsiveness for UI elements like buttons and sliders.
Dynamic Content: Handling dynamic content and data-driven UI updates efficiently.
Accessibility: Ensuring UI elements are accessible to users with disabilities, such as screen readers or alternative input methods.
What special folders with names reserved by Unity do you know, and what are they used for?
Some special folders reserved by Unity include:
Assets: Contains all project assets, including scripts, textures, models, and scenes.
Editor: Used for scripts and assets that only run in the Unity Editor, such as custom inspectors or editor extensions.
Resources: Contains assets that are loaded at runtime using the Resources.Load() method.
StreamingAssets: Stores data files that are included with the built application and can be accessed at runtime.
Plugins: Used for native plugins and libraries for specific target platforms.
Gizmos: Holds textures and icons used by the Gizmos API in the Scene view. (A “Prefabs” folder, by contrast, is only a common project convention; the name is not reserved by Unity.)
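The Resources folder convention above can be sketched as follows: an asset saved under a path like Assets/Resources/Prefabs/Enemy.prefab (a hypothetical path) is loaded at runtime by its path relative to Resources, with no file extension.

```csharp
using UnityEngine;

public class EnemySpawner : MonoBehaviour
{
    private void Start()
    {
        // Path is relative to any Resources folder, extension omitted.
        GameObject prefab = Resources.Load<GameObject>("Prefabs/Enemy");
        if (prefab != null)
        {
            Instantiate(prefab, Vector3.zero, Quaternion.identity);
        }
    }
}
```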
What profiling tools in Unity editor do you know?
Some profiling tools in the Unity Editor include:
Profiler: Provides detailed performance data, including CPU, GPU, memory usage, and rendering statistics.
Frame Debugger: Allows inspection of draw calls, shaders, and textures to identify rendering bottlenecks.
Memory Profiler: Helps analyze memory usage and identify memory leaks and allocations.
GPU Profiler: Specifically focuses on GPU performance, including rendering time and GPU memory usage.
Profiler Window: Displays real-time performance metrics during play mode to identify performance issues.
Do you know about the Layer Collision Matrix? What is it used for?
Yes, the layer collision matrix in Unity is a tool used to control which layers interact with each other in terms of collision detection and physics simulation. It allows developers to specify which layers can collide with each other and which layers should ignore collisions.
For example, you might use the layer collision matrix to ensure that player characters collide with obstacles and enemies but pass through each other, or to prevent certain objects like pickups from colliding with the environment.
Overall, the layer collision matrix helps manage collision behavior efficiently and accurately within Unity games.
What is Unit testing? Did you use test run in Unity?
Unit testing is a software testing method where individual units or components of a program are tested in isolation to ensure they work as expected. It involves writing code to verify that specific functions or methods produce the correct output for a given input.
Yes, I’ve used test runners in Unity to automate the execution of unit tests. This helps ensure the reliability and stability of the codebase, especially when making changes or adding new features.
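A sketch of what such a test looks like under Unity's Test Framework, which is built on NUnit and runs via the Test Runner window (Window > General > Test Runner). DamageCalculator is a hypothetical pure C# class; keeping logic out of MonoBehaviours like this is what makes it easy to unit test.

```csharp
using NUnit.Framework;

// Plain game logic with no engine dependency -- trivially testable.
public static class DamageCalculator
{
    public static int Apply(int health, int damage) =>
        System.Math.Max(0, health - damage);
}

public class DamageCalculatorTests
{
    [Test]
    public void Apply_NeverReturnsNegativeHealth()
    {
        Assert.AreEqual(0, DamageCalculator.Apply(10, 25));
        Assert.AreEqual(5, DamageCalculator.Apply(10, 5));
    }
}
```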
What is the situation with asynchrony in Unity? What are the pitfalls?
In Unity, asynchronous programming allows tasks to run concurrently without blocking the main thread, improving performance and responsiveness. However, pitfalls can include:
Callback Hell: Nested callbacks can lead to complex and hard-to-read code.
Race Conditions: Asynchronous tasks can create race conditions if not synchronized properly, leading to unpredictable behavior.
Memory Leaks: Not properly managing asynchronous resources can result in memory leaks.
Main-Thread API: Most of the Unity API (including UI) can only be called from the main thread, so results from background tasks must be marshalled back to the main thread before touching scene objects.
Error Handling: Errors in asynchronous tasks might not be caught if error handling is not implemented correctly.
What types of rendering do you know, and what are the differences between them?
The main types of rendering in Unity are:
Forward Rendering: Processes each object in the scene individually, calculating lighting and shading per object. Suitable for scenes with fewer lights and simpler shaders.
Deferred Rendering: Separates the rendering process into two stages: geometry and lighting. Allows for more complex lighting effects and supports a higher number of lights.
Legacy Rendering Pipeline: The older rendering pipeline in Unity, now replaced by the Scriptable Render Pipeline (SRP). Provides less flexibility and performance compared to SRP.
The key differences lie in how they handle lighting, shading, and performance, with each rendering method having its own strengths and weaknesses depending on the requirements of the project.
What is Render pipeline, what RP presets do you know?
A Render Pipeline in Unity defines the sequence of steps and techniques used to render scenes. It includes processes like geometry rendering, lighting, and post-processing effects.
Some Render Pipeline (RP) presets in Unity include:
Built-in Render Pipeline (BRP): The legacy default rendering pipeline in Unity, which the newer Scriptable Render Pipeline (SRP) based pipelines are gradually superseding.
Universal Render Pipeline (URP): Formerly known as the Lightweight Render Pipeline (LWRP), optimized for performance and efficiency on a wide range of platforms, including mobile devices.
High Definition Render Pipeline (HDRP): Designed for high-fidelity graphics and realistic rendering, targeting high-end platforms like PC and consoles.
What is the baking of light?
Light baking is the process of precomputing and storing lighting information in a scene to improve rendering performance. It calculates how light interacts with surfaces and objects and stores this information in textures or lightmaps. This allows for more efficient rendering by reducing the need for dynamic lighting calculations at runtime, especially in scenes with static or semi-static lighting setups.
What is better: 1000 objects that each run their own Update, or 1 object that performs 1000 updates? Why?
In most cases, it’s better to have one object that performs 1000 updates rather than 1000 objects that each run a separate Update. Unity invokes every MonoBehaviour’s Update through a native-to-managed engine call, so the per-object overhead becomes significant with large numbers of objects. Consolidating the work into a single manager loop reduces this overhead and can lead to better overall performance.
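A sketch of the manager-update pattern: one MonoBehaviour ticks many plain C# objects, replacing a thousand individual Update() callbacks with one. (EnemyManager and Enemy are illustrative names.)

```csharp
using System.Collections.Generic;
using UnityEngine;

// One engine callback drives all enemies.
public class EnemyManager : MonoBehaviour
{
    private readonly List<Enemy> enemies = new List<Enemy>();

    private void Update()
    {
        float dt = Time.deltaTime;
        for (int i = 0; i < enemies.Count; i++)
        {
            enemies[i].Tick(dt); // plain method call, no per-object engine overhead
        }
    }
}

// A plain class, not a MonoBehaviour -- no engine callback of its own.
public class Enemy
{
    public Vector3 Position;

    public void Tick(float deltaTime)
    {
        Position += Vector3.forward * deltaTime;
    }
}
```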
How do you understand the concept of CI/CD?
CI/CD, or Continuous Integration/Continuous Deployment, is a development practice where code changes are automatically built, tested, and deployed to production environments in a continuous manner. It involves automating the entire software delivery pipeline to ensure that changes are quickly and consistently delivered to end-users.
Explain the use of Coroutine
Coroutines in Unity are used to execute code over multiple frames, allowing for delays, waiting for conditions, or running tasks asynchronously without blocking the main thread. They are useful for animations, timed events, and handling asynchronous operations like loading resources.
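A coroutine sketch: fading a sprite out over one second without blocking the main thread. Execution pauses at each yield and resumes on the next frame. (The Fader class is illustrative.)

```csharp
using System.Collections;
using UnityEngine;

public class Fader : MonoBehaviour
{
    [SerializeField] private SpriteRenderer sprite;

    private void Start()
    {
        StartCoroutine(FadeOut(1f));
    }

    private IEnumerator FadeOut(float duration)
    {
        float elapsed = 0f;
        while (elapsed < duration)
        {
            elapsed += Time.deltaTime;
            Color c = sprite.color;
            c.a = Mathf.Lerp(1f, 0f, elapsed / duration);
            sprite.color = c;
            yield return null; // suspend until the next frame
        }
    }
}
```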
What roles do the Inspector, Hierarchy, Project, and Scene panels have in the Unity editor? Which is responsible for referencing the content that will be included in the build process?
Inspector: Displays and allows editing of properties for selected GameObjects or assets.
Hierarchy: Shows all GameObjects in the current scene, organized in a parent-child structure.
Project: Contains all assets and files in the project, organized in folders.
Scene: Visual representation of the game world, where you place and manipulate GameObjects.
The Project panel references the content that can be included in the build; which scenes (and therefore which referenced assets) actually ship is determined by the scene list in Build Settings.
Explain the concept of scriptable objects
ScriptableObjects in Unity are assets that store data. They allow you to create and manage data independent of scene objects. This makes them useful for sharing configuration data, settings, and other information across multiple instances and scenes without duplicating the data. They help in creating modular and easily maintainable code.
What is the Unity camera component and how does it work?
The Unity Camera component is used to capture and display the game world from a specific viewpoint. It works by rendering the scene onto the screen, controlling what is visible based on its position, rotation, and settings like field of view and clipping planes. It supports various effects like depth of field and motion blur and can be adjusted for perspective or orthographic views.