Theory Flashcards

1
Q

What are the basic principles of Unity architecture and how do they affect game development?

A

The basic principles of Unity architecture revolve around GameObjects, Components, Scenes, and Scripts:

GameObjects: These are the fundamental entities in Unity, representing characters, props, cameras, lights, etc.

Components: Components are attached to GameObjects to add functionality or behavior. Examples include Rigidbody for physics, Renderer for visuals, and Scripts for custom logic.

Scenes: Scenes are collections of GameObjects that make up a level or a portion of a game world. They help organize the game’s structure and content.

Scripts: Scripts are written in languages like C# and are attached to GameObjects as Components. They define the behavior and interactions of GameObjects in the game.
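As a sketch of how these pieces fit together, a script becomes a Component the moment it is attached to a GameObject (the class name `Mover` and the `speed` field here are illustrative, not from the source):

```csharp
using UnityEngine;

// A minimal custom Component: attach this script to any GameObject
// in a Scene and it moves that object to the right every frame.
public class Mover : MonoBehaviour
{
    public float speed = 2f; // editable in the Inspector

    void Update()
    {
        // transform is the Transform component of the owning GameObject
        transform.Translate(Vector3.right * speed * Time.deltaTime);
    }
}
```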

2
Q

What physics mechanics have you used to create realistic object movement?

A

I’ve utilized rigid body dynamics, which simulate the motion and interactions of objects based on their mass, forces, and collisions. Additionally, I’ve employed techniques like raycasting for precise object detection and physics-based animation blending for smoother movement transitions.

3
Q

What is the Rigidbody component?

A

The Rigidbody component in Unity is essential for simulating realistic physics in games. It’s attached to GameObjects to give them physical properties like mass, gravity, and forces. This allows objects to respond naturally to collisions, forces, and movement commands within the game world. Rigidbody is crucial for creating realistic interactions between game elements, enhancing gameplay dynamics, and ensuring objects behave according to the laws of physics, making games more immersive and engaging for players.
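A minimal sketch of driving a Rigidbody with forces rather than moving its Transform directly (the force value and class name are illustrative assumptions):

```csharp
using UnityEngine;

// Pushes the attached Rigidbody upward when Space is pressed.
[RequireComponent(typeof(Rigidbody))]
public class JumpOnSpace : MonoBehaviour
{
    public float jumpForce = 5f; // hypothetical tuning value

    private Rigidbody rb;

    void Awake()
    {
        rb = GetComponent<Rigidbody>();
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            // Impulse applies the force instantly, scaled by mass
            rb.AddForce(Vector3.up * jumpForce, ForceMode.Impulse);
        }
    }
}
```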

4
Q

What is the Collider component, and what collider types exist? What is the difference between isTrigger mode and normal mode?

A

The Collider component in Unity defines the shape of objects for collision detection and physics. There are types like Box, Sphere, Capsule, and Mesh Colliders.

In normal mode, colliders physically interact with other objects, affecting their movement. In isTrigger mode, colliders don’t cause physical reactions but trigger events when other objects enter their area, useful for creating game mechanics like checkpoints or collectibles.
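The difference shows up in which callback Unity invokes. A sketch, assuming at least one of the two objects involved has a Rigidbody:

```csharp
using UnityEngine;

// OnCollisionEnter fires for normal (non-trigger) colliders;
// OnTriggerEnter fires when this collider has "Is Trigger" enabled.
public class CollisionLogger : MonoBehaviour
{
    void OnCollisionEnter(Collision collision)
    {
        // Physical contact: the objects block each other
        Debug.Log("Collided with " + collision.gameObject.name);
    }

    void OnTriggerEnter(Collider other)
    {
        // No physical reaction: the other object passes through,
        // useful for checkpoints or collectibles
        Debug.Log(other.name + " entered the trigger zone");
    }
}
```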

5
Q

What is the Sprite Renderer component? What is the difference between Order in Layer and Sorting Layer?

A

The Sprite Renderer component in Unity displays 2D graphics or sprites on GameObjects in the game world. It allows developers to easily add visual elements like characters, backgrounds, or objects to their games.

Order in Layer: Determines the rendering order of sprites on the same sorting layer. Higher values appear in front of lower values.

Sorting Layer: Defines groups of sprites that can be controlled independently for rendering order. Each layer can have its own order of rendering relative to other layers.

6
Q

Name the main lifecycle methods of MonoBehaviour.

A

The main lifecycle of MonoBehaviour in Unity consists of several key methods:

Awake(): Called when the script instance is being loaded. This is used for initializing variables or setting up references.

Start(): Called before the first frame update, after Awake(). This is often used for initialization that requires all GameObjects to be initialized.

Update(): Called once per frame. This is where most of the game’s code and logic for updating GameObjects typically goes.

FixedUpdate(): Called at fixed intervals, typically used for physics-related calculations. This ensures that physics calculations are consistent across different devices.

LateUpdate(): Called after all Update() functions have been called. Useful for actions that need to be performed after all Update() calls are completed.

OnEnable(): Called when the object becomes enabled and active. This is often used for re-initialization or setting up event listeners.

OnDisable(): Called when the object becomes disabled or inactive. This is used for cleaning up resources or removing event listeners.

OnDestroy(): Called when the GameObject is being destroyed. This is where you release any resources or perform any final cleanup.

Understanding and utilizing these methods correctly is crucial for managing the behavior and lifecycle of GameObjects in Unity.
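The order above can be observed directly with a logging script like this sketch:

```csharp
using UnityEngine;

// Logs the order in which the main lifecycle methods run.
public class LifecycleLogger : MonoBehaviour
{
    void Awake()       { Debug.Log("Awake: set up internal references"); }
    void OnEnable()    { Debug.Log("OnEnable: subscribe to events"); }
    void Start()       { Debug.Log("Start: other objects exist now"); }
    void FixedUpdate() { Debug.Log("FixedUpdate: physics step"); }
    void Update()      { Debug.Log("Update: once per frame"); }
    void LateUpdate()  { Debug.Log("LateUpdate: after all Updates"); }
    void OnDisable()   { Debug.Log("OnDisable: unsubscribe from events"); }
    void OnDestroy()   { Debug.Log("OnDestroy: final cleanup"); }
}
```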

7
Q

How do the Awake, Start, Update, and FixedUpdate methods work?

A

Awake(): This method is called when the script instance is being loaded. It’s typically used for initialization tasks that need to be performed before the game starts. Awake() is called even if the GameObject is not active in the scene hierarchy.

Start(): Start() is called before the first frame update, after Awake(). It’s often used for initialization that requires all GameObjects to be initialized. Start() is not called if the GameObject is inactive.

Update(): Update() is called once per frame and is where most of the game’s code and logic for updating GameObjects typically goes. It’s used for tasks that need to be performed continuously, such as player input processing, animation updates, or AI behavior.

FixedUpdate(): FixedUpdate() is called at fixed intervals, typically used for physics-related calculations. This ensures that physics calculations are consistent across different devices, regardless of the frame rate. FixedUpdate() is commonly used for rigidbody physics interactions, such as applying forces or performing movement calculations.

8
Q

When is it better to use Update, FixedUpdate, or LateUpdate?

A

The choice between Update, FixedUpdate, and LateUpdate depends on the specific needs of your game and the behavior you’re implementing:

Update(): This method is called once per frame and is suitable for general-purpose code that doesn’t require strict timing, such as player input processing, animation updates, or non-physics related movement. However, Update() is not frame-rate independent, so if the frame rate drops, the game might appear less smooth.

FixedUpdate(): FixedUpdate() is called at fixed intervals, typically synchronized with the physics engine’s fixed time step. It’s ideal for physics-related calculations, such as applying forces, performing movement calculations for Rigidbody objects, or handling physics-based interactions. FixedUpdate() ensures consistent physics behavior across different devices, regardless of the frame rate.

LateUpdate(): LateUpdate() is called after all Update() functions have been called. It’s useful for actions that need to be performed after other updates have occurred, such as camera follow behavior, where you want the camera to update after the player has moved. LateUpdate() can also be used for procedural animation updates or other actions that depend on the state of GameObjects after all other updates have been processed.

Choosing the appropriate update method ensures that your game behaves correctly and efficiently. By using FixedUpdate() for physics-related calculations, you ensure consistent physics behavior. Update() is suitable for general-purpose code, while LateUpdate() is useful for actions that need to be synchronized with the state of GameObjects after other updates have occurred.

9
Q

Is it possible to change the frequency of FixedUpdate? Why would you do it?

A

Yes, you can change the frequency of FixedUpdate by adjusting the Fixed Timestep in the Time settings. This is done to balance performance and accuracy:

Lower Frequency (increase timestep): Improves performance but reduces physics accuracy.
Higher Frequency (decrease timestep): Enhances physics accuracy but can impact performance.
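The timestep can be changed in Edit > Project Settings > Time, or from code via Time.fixedDeltaTime. A sketch (the 1/100 value is illustrative, not a recommendation):

```csharp
using UnityEngine;

// Raises the physics update rate from the default 50 Hz to 100 Hz.
public class PhysicsRateConfig : MonoBehaviour
{
    void Awake()
    {
        // Default is 0.02 (50 physics steps per second).
        // A smaller timestep improves accuracy at the cost of CPU time.
        Time.fixedDeltaTime = 1f / 100f;
    }
}
```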

10
Q

What is the difference between the Awake and Start methods, and what are they used for?

A

Awake() is called when the script instance is being loaded, used for initialization tasks like setting up references.

Start() is called before the first frame update, often used for initialization requiring all GameObjects to be initialized.

11
Q

What is a ScriptableObject, and what programming pattern does it implement?

A

A ScriptableObject is a data container in Unity used to store data independently of scene instances. It is commonly described as an implementation of the Flyweight pattern: many objects reference a single shared data asset instead of each holding a duplicate copy of the data.
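A minimal sketch of a shared data asset (the EnemyStats type and its fields are hypothetical examples):

```csharp
using UnityEngine;

// CreateAssetMenu adds an entry to the Assets > Create menu.
// Many enemy instances can then reference the same EnemyStats
// asset instead of each duplicating the values.
[CreateAssetMenu(fileName = "EnemyStats", menuName = "Game/Enemy Stats")]
public class EnemyStats : ScriptableObject
{
    public int maxHealth = 100;  // illustrative fields
    public float moveSpeed = 3f;
}
```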

12
Q

What is PlayerPrefs, and what types of data can be stored in it? What storage methods do you use in Unity?

A

PlayerPrefs is a class in Unity used to store and retrieve player preferences between game sessions. It can store integers, floats, and strings. In Unity, other storage methods include serialization, databases, and cloud services for more complex data management.
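A sketch of PlayerPrefs with the three supported types (the key names are illustrative):

```csharp
using UnityEngine;

// Stores and retrieves simple player preferences between sessions.
public static class SaveExample
{
    public static void Save(int score, float volume, string playerName)
    {
        PlayerPrefs.SetInt("HighScore", score);
        PlayerPrefs.SetFloat("Volume", volume);
        PlayerPrefs.SetString("PlayerName", playerName);
        PlayerPrefs.Save(); // flush values to disk
    }

    public static int LoadScore()
    {
        // Second argument is the default if the key does not exist
        return PlayerPrefs.GetInt("HighScore", 0);
    }
}
```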

13
Q

What animation did you work with?

A

I’ve primarily worked with built-in Unity animations, utilizing features like Animator and Animation components to create movement and interactions within the game world.

14
Q

What are draw calls, and how do you reduce their number? How does batching work?

A

Draw Calls are requests made by the CPU to the GPU to render objects. Reducing their numbers improves performance. This can be done by optimizing materials, using texture atlases, and combining meshes. Batching combines multiple objects into a single draw call, reducing CPU-GPU communication overhead.

15
Q

What is a Sprite Atlas? When should you ask an artist to pack sprites into a sprite atlas rather than doing it in the editor?

A

A Sprite Atlas is a collection of multiple sprites packed into a single texture. You should ask an artist to pack sprites in a sprite atlas when:

There are many sprites in the scene.
Sprites share similar textures or materials.
You need to reduce draw calls and improve performance.
It’s beneficial even when it’s easier to do in the editor because it optimizes rendering performance and memory usage.

16
Q

How can I optimize the size of an application?

A

You can optimize the size of an application in Unity by:

Texture Compression: Use appropriate texture compression settings to reduce the size of textures.
Asset Bundles: Load assets dynamically through asset bundles to reduce initial build size.
Code Stripping: Use code stripping options to remove unused code and reduce build size.
Audio Compression: Compress audio files to reduce their size without significant loss in quality.
Asset Optimization: Remove unnecessary assets, scripts, and components to reduce build size.
Platform-specific Optimization: Optimize assets and settings for specific target platforms to reduce build size.
Build Settings: Adjust build settings to exclude unnecessary files and reduce build size.

17
Q

What ways do you know to optimize the use of RAM?

A

To optimize the use of RAM in Unity:

Asset Compression: Compress textures, audio, and other assets to reduce memory usage.
Asset Streaming: Load assets dynamically as needed instead of loading everything upfront.
Texture Size Reduction: Use smaller texture sizes and mipmaps to reduce memory consumption.
Object Pooling: Reuse GameObjects instead of instantiating and destroying them to minimize memory allocations.
Memory Profiling: Use Unity’s profiler to identify memory-intensive areas and optimize accordingly.
Texture Atlases: Combine multiple textures into a single atlas to reduce memory overhead.
Script Optimization: Optimize scripts to minimize memory allocations and deallocations.
Platform-specific Optimization: Optimize settings and assets for target platforms to reduce memory usage.

18
Q

What are Unity Events?

A

Unity Events are a messaging system used to trigger functions or methods in response to specific actions or conditions in Unity. They allow for decoupling of code and can be used to create modular and flexible systems.
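A sketch of the decoupling this enables: listeners are wired up in the Inspector, so this script never needs to know about them (the Pickup class and "Player" tag are illustrative assumptions):

```csharp
using UnityEngine;
using UnityEngine.Events;

// A collectible that fires a UnityEvent when the player touches it.
public class Pickup : MonoBehaviour
{
    public UnityEvent onCollected; // assign listeners in the Inspector

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
        {
            onCollected.Invoke(); // run all registered listeners
            Destroy(gameObject);
        }
    }
}
```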

19
Q

What is Raycast?

A

A Raycast is a physics-based method in Unity used to detect objects along a line or ray in a scene. It’s commonly used for collision detection, object interaction, and determining line of sight.
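A minimal line-of-sight sketch using Physics.Raycast (class name and distance are illustrative):

```csharp
using UnityEngine;

// Casts a ray forward from this object and reports the first hit.
public class SightCheck : MonoBehaviour
{
    public float maxDistance = 50f;

    void Update()
    {
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward,
                            out hit, maxDistance))
        {
            // hit.collider and hit.point describe the object struck
            Debug.Log("Looking at " + hit.collider.name);
        }
    }
}
```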

20
Q

Did you work with the Particle System? What problems can be solved by using it?

A

Yes, I’ve worked with Particle Systems in Unity. They’re used to create various visual effects like fire, smoke, explosions, and magical spells. Particle Systems can solve problems such as creating realistic environmental effects, enhancing gameplay feedback, and adding visual polish to games.

21
Q

What particle systems do you use to create the effects of fire, smoke, or explosion in games?

A

To create effects like fire, smoke, or explosions in Unity games, I utilize the Particle System component. This component allows me to generate and control thousands of particles, defining their behavior, appearance, and interaction with the environment.

22
Q

What is a Canvas? How many Canvases can exist in a scene at the same time? When is there a need for more than one Canvas?

A

In Unity, a Canvas is a UI (User Interface) component used to render UI elements like buttons, text, and images in a game. There is no limit to the number of Canvases that can exist in a scene simultaneously. Multiple Canvases are often used when different UI elements require separate rendering settings or when organizing UI elements into layers for better management and performance.

23
Q

What Render Modes do Canvases have, and what is each of them used for?

A

In Unity's Canvas component, there are three Render Modes:

Screen Space - Overlay: Renders UI elements on top of the scene, ignoring depth and perspective. It's commonly used for HUDs, menus, and other elements that should always be visible.

Screen Space - Camera: Renders UI elements within a specified camera's view, allowing for depth and perspective effects. It's useful for creating UI that interacts with the 3D environment or for in-world UI elements.

World Space: Renders UI elements as 3D objects within the scene, allowing for interaction with other GameObjects. It's used for complex UI elements that need to move, rotate, or scale in 3D space.

24
Q

How is the drawing order of UI elements determined? What can you control?

A

The procedure for drawing UI elements in Unity is determined by the Canvas component’s properties and hierarchy. You can control:

Canvas Render Mode: Choose between Overlay, Camera, or World Space to define how the UI is rendered.
UI Element Hierarchy: The order of UI elements in the hierarchy determines their drawing order.
Sorting Layer and Order in Layer: Control the rendering order of canvases and UI elements within the same layer.
Anchors and Pivot: Define the position and scaling behavior of UI elements relative to their parent.
RectTransform: Adjust size, position, and rotation of UI elements.
These settings allow you to manage the layout, rendering, and interaction of UI elements effectively.

25
Q

If you were given a choice of two file formats for a UI designer to deliver a screen reference in, which formats would they be?

A

The two extensions I would recommend for a UI designer to draw a screen reference in Unity are:

PSD (Photoshop Document): For detailed and layered design files.
SVG (Scalable Vector Graphics): For resolution-independent vector graphics.
These formats ensure high-quality and scalable UI design assets.

26
Q

Did you work with TextMeshPro? What is its disadvantage?

A

Yes, I’ve worked with TextMeshPro in Unity. Its main disadvantage is that it can be more complex and resource-intensive to set up and manage compared to the default UI Text component. However, it offers superior text rendering quality and features.

27
Q

What is DOTween?

A

DOTween is a popular tweening library for Unity, used to create smooth and fluid animations. It simplifies the process of animating properties like position, scale, rotation, and color, making it easier to create dynamic and polished effects in games.

28
Q

What other third party extensions have you used for Unity?

A

TextMesh Pro: For high-quality text rendering and typography.

29
Q

What are prefabs? What is the difference between Prefabs and a scene?

A

Prefabs in Unity are preconfigured GameObjects saved as reusable assets. They allow for easy duplication and reuse of GameObjects across scenes.

The main difference between Prefabs and a scene is that Prefabs are reusable templates of GameObjects that can be instantiated multiple times across different scenes, while a scene is a specific environment containing GameObjects, lighting, and other elements that make up a level or portion of a game world.

30
Q

What are the main components of Unity?

A

Unity Editor: This is the main interface where developers create and modify their projects. It includes tools for scene editing, asset management, scripting, debugging, and more.

Game Engine: At the core of Unity is its game engine, which handles rendering, physics, audio, scripting, networking, and other low-level functionalities needed to run a game.

Scripting: Unity supports scripting in C# (and previously UnityScript, which is now deprecated). Developers use scripts to define game behavior, interactions, and logic.

Asset Store: A marketplace where developers can buy and sell assets such as 3D models, textures, scripts, and tools. It’s a valuable resource for accelerating development.

Physics Engine: Unity includes a built-in physics engine (PhysX) for simulating realistic physical interactions between objects in the game world.

Graphics: Unity supports high-quality 2D and 3D graphics through its rendering pipeline, shaders, lighting system, and post-processing effects.

Cross-Platform Support: Unity allows developers to build games for multiple platforms including PC, consoles, mobile devices, VR/AR devices, and the web, using a single codebase.

MonoDevelop / Visual Studio Integration: Unity integrates with popular IDEs (Integrated Development Environments) like Visual Studio and MonoDevelop for coding and debugging.

Multiplatform Deployment: Unity enables developers to deploy games to a variety of platforms including iOS, Android, Windows, macOS, Linux, PlayStation, Xbox, and more.

Analytics and Monetization: Unity provides tools for analytics to understand player behavior and optimize games, as well as features for in-app purchases and ads to monetize games.

31
Q

What file types can be imported into or exported from Unity?

A

Unity Package (.unitypackage): Contains assets, scripts, and scenes for sharing between projects.

Asset Bundle: A collection of assets packaged for dynamic loading during runtime.

Build Executables: Compiled applications for various platforms, such as Windows, macOS, iOS, Android, etc.

Scene File (.unity): Contains the layout and content of a Unity scene.

Prefab File (.prefab): Contains a template of a GameObject with its components and properties.

Script File (.cs, .js, .shader): Code files written in languages like C#, JavaScript, or shader languages.

Model File (.fbx, .obj): 3D model files used for importing meshes, materials, and animations into Unity.

Texture File (.png, .jpg, .tga): Image files used for texturing 3D models and UI elements.

Audio File (.wav, .mp3): Sound files used for audio effects and music in the game.

32
Q

Did you work with UI? What problems come with it?

A

Yes, I’ve worked with UI in Unity. Some common problems with UI development include:

Scaling: Ensuring UI elements look and function correctly across different screen sizes and resolutions.
Performance: Optimizing UI rendering and interaction for smooth performance, especially in complex UI layouts.
Localization: Supporting multiple languages and text layouts within the UI.
Interactivity: Implementing user input handling and responsiveness for UI elements like buttons and sliders.
Dynamic Content: Handling dynamic content and data-driven UI updates efficiently.
Accessibility: Ensuring UI elements are accessible to users with disabilities, such as screen readers or alternative input methods.

33
Q

What special folders with names reserved by Unity do you know? What are they used for?

A

Some special folders reserved by Unity include:

Assets: Contains all project assets, including scripts, textures, models, and scenes.

Editor: Used for scripts and assets that only run in the Unity Editor, such as custom inspectors or editor extensions.

Resources: Contains assets that are loaded at runtime using the Resources.Load() method.

StreamingAssets: Stores data files that are included with the built application and can be accessed at runtime.

Plugins: Used for native plugins and libraries for specific target platforms.

Gizmos: Contains icon textures used by the Gizmos.DrawIcon() method to display markers in the Scene view.

34
Q

What profiling tools in the Unity Editor do you know?

A

Some profiling tools in the Unity Editor include:

Profiler window: Displays real-time performance data during Play mode, including CPU, GPU, memory usage, and rendering statistics, broken down per module.

Frame Debugger: Allows stepping through individual draw calls to inspect shaders, textures, and render state, helping identify rendering bottlenecks.

Memory Profiler: Helps analyze memory usage in detail and identify memory leaks and allocations.

Physics Debugger: Visualizes colliders and physics activity in the scene to diagnose physics problems.

35
Q

Do you know about the Layer Collision Matrix? What is it used for?

A

Yes, the layer collision matrix in Unity is a tool used to control which layers interact with each other in terms of collision detection and physics simulation. It allows developers to specify which layers can collide with each other and which layers should ignore collisions.

For example, you might use the layer collision matrix to ensure that player characters collide with obstacles and enemies but pass through each other, or to prevent certain objects like pickups from colliding with the environment.

Overall, the layer collision matrix helps manage collision behavior efficiently and accurately within Unity games.

36
Q

What is unit testing? Did you use the Test Runner in Unity?

A

Unit testing is a software testing method where individual units or components of a program are tested in isolation to ensure they work as expected. It involves writing code to verify that specific functions or methods produce the correct output for a given input.

Yes, I’ve used test runners in Unity to automate the execution of unit tests. This helps ensure the reliability and stability of the codebase, especially when making changes or adding new features.

37
Q

What is the situation with asynchrony in Unity? What are the pitfalls?

A

In Unity, asynchronous programming allows tasks to run concurrently without blocking the main thread, improving performance and responsiveness. However, pitfalls can include:

Callback Hell: Nested callbacks can lead to complex and hard-to-read code.

Race Conditions: Asynchronous tasks can create race conditions if not synchronized properly, leading to unpredictable behavior.

Memory Leaks: Not properly managing asynchronous resources can result in memory leaks.

UI Updates: Updating UI elements from asynchronous tasks may require careful synchronization to avoid errors.

Error Handling: Errors in asynchronous tasks might not be caught if error handling is not implemented correctly.

38
Q

What types of rendering do you know, and what are the differences between them?

A

The main types of rendering in Unity are:

Forward Rendering: Processes each object in the scene individually, calculating lighting and shading per object. Suitable for scenes with fewer lights and simpler shaders.

Deferred Rendering: Separates the rendering process into two stages: geometry and lighting. Allows for more complex lighting effects and supports a higher number of lights.

Legacy Rendering Pipeline: The older rendering pipeline in Unity, now replaced by the Scriptable Render Pipeline (SRP). Provides less flexibility and performance compared to SRP.

The key differences lie in how they handle lighting, shading, and performance, with each rendering method having its own strengths and weaknesses depending on the requirements of the project.

39
Q

What is a Render Pipeline, and what RP presets do you know?

A

A Render Pipeline in Unity defines the sequence of steps and techniques used to render scenes. It includes processes like geometry rendering, lighting, and post-processing effects.

Some Render Pipeline (RP) presets in Unity include:

Built-in Render Pipeline: Unity's original default pipeline; it offers the least customization and is being superseded by pipelines built on the Scriptable Render Pipeline (SRP).

Universal Render Pipeline (URP): Formerly known as the Lightweight Render Pipeline (LWRP), optimized for performance and efficiency on a wide range of platforms, including mobile devices.

High Definition Render Pipeline (HDRP): Designed for high-fidelity graphics and realistic rendering, targeting high-end platforms like PC and consoles.

40
Q

What is the baking of light?

A

Light baking is the process of precomputing and storing lighting information in a scene to improve rendering performance. It calculates how light interacts with surfaces and objects and stores this information in textures or lightmaps. This allows for more efficient rendering by reducing the need for dynamic lighting calculations at runtime, especially in scenes with static or semi-static lighting setups.

41
Q

What is better: 1000 objects that each run their own Update, or 1 object that performs 1000 updates? Why?

A

In most cases, it’s better to have one object that causes 1000 updates rather than 1000 objects that each cause a separate update. This is because Unity’s overhead for managing individual GameObjects and their Update calls can become significant when dealing with a large number of objects. Consolidating updates into fewer objects reduces this overhead and can lead to better overall performance.

42
Q

How do you understand the concept of CI/CD?

A

CI/CD, or Continuous Integration/Continuous Deployment, is a development practice where code changes are automatically built, tested, and deployed to production environments in a continuous manner. It involves automating the entire software delivery pipeline to ensure that changes are quickly and consistently delivered to end users.

43
Q

Explain the use of Coroutine

A

Coroutines in Unity are used to execute code over multiple frames, allowing for delays, waiting for conditions, or running tasks asynchronously without blocking the main thread. They are useful for animations, timed events, and handling asynchronous operations like loading resources.
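A minimal sketch: the method pauses at each yield and resumes on a later frame without blocking the main thread.

```csharp
using System.Collections;
using UnityEngine;

// Prints a message two seconds after the scene starts.
public class DelayedMessage : MonoBehaviour
{
    void Start()
    {
        StartCoroutine(ShowAfterDelay(2f));
    }

    IEnumerator ShowAfterDelay(float seconds)
    {
        yield return new WaitForSeconds(seconds); // wait without blocking
        Debug.Log("Two seconds have passed");
    }
}
```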

44
Q

What roles do the Inspector, Hierarchy, Project, and Scene panels have in the Unity Editor? Which is responsible for referencing the content that will be included in the build process?

A

Inspector: Displays and allows editing of properties for selected GameObjects or assets.
Hierarchy: Shows all GameObjects in the current scene, organized in a parent-child structure.
Project: Contains all assets and files in the project, organized in folders.
Scene: Visual representation of the game world, where you place and manipulate GameObjects.
The Project panel is responsible for referencing the content that will be included in the build process.

45
Q

Explain the concept of scriptable objects

A

ScriptableObjects in Unity are assets that store data. They allow you to create and manage data independent of scene objects. This makes them useful for sharing configuration data, settings, and other information across multiple instances and scenes without duplicating the data. They help in creating modular and easily maintainable code.

46
Q

What is the Unity camera component and how does it work?

A

The Unity Camera component is used to capture and display the game world from a specific viewpoint. It works by rendering the scene onto the screen, controlling what is visible based on its position, rotation, and settings like field of view and clipping planes. It supports various effects like depth of field and motion blur and can be adjusted for perspective or orthographic views.

47
Q

Explain why Time.deltaTime should be used to make things that depend on time operate correctly

A

Time.deltaTime should be used to make things that depend on time operate correctly because it represents the time elapsed since the last frame. This ensures that movements and animations are frame-rate independent, making them smooth and consistent regardless of the frame rate.
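A sketch of the idea: multiplying by Time.deltaTime turns "units per frame" into "units per second".

```csharp
using UnityEngine;

// Moves forward at the same real-world speed regardless of frame rate.
public class ConstantSpeedMover : MonoBehaviour
{
    public float speed = 5f; // units per second

    void Update()
    {
        // At 30 FPS deltaTime is about 0.033, at 60 FPS about 0.016;
        // the product covers the same distance per second either way.
        transform.Translate(Vector3.forward * speed * Time.deltaTime);
    }
}
```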

48
Q

Explain why vectors should be normalized when used to move an object?

A

Vectors should be normalized when used to move an object to ensure consistent speed regardless of the vector’s magnitude. Normalizing a vector sets its length to 1 while preserving its direction, allowing for uniform movement regardless of distance. This prevents objects from moving faster when the vector’s magnitude is larger and maintains consistent behavior across different movement scenarios.
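A sketch of the problem and fix: without normalization, a distant target would make the object move faster, because the direction vector's length grows with distance.

```csharp
using UnityEngine;

// Moves toward a target at a constant speed by normalizing direction.
public class MoveTowardTarget : MonoBehaviour
{
    public Transform target;
    public float speed = 4f;

    void Update()
    {
        Vector3 toTarget = target.position - transform.position;
        Vector3 direction = toTarget.normalized; // length 1, same direction
        transform.Translate(direction * speed * Time.deltaTime);
    }
}
```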

49
Q

Explain the concept of Scene Management in unity

A

Scene Management in Unity refers to the process of loading, unloading, and organizing scenes within a Unity project. It allows developers to create modular game levels, menus, and other content by separating them into individual scenes. Scene Management also provides tools for transitioning between scenes, controlling scene activation, and passing data between scenes. This allows for efficient development and organization of complex projects while maintaining flexibility and scalability.

50
Q

Explain the use of Unity’s audio system

A

Unity’s audio system allows developers to add sound effects and music to their games. It provides components and tools for importing audio files, adjusting volume, pitch, and spatialization, and controlling playback. Developers can use AudioSources to play sounds attached to GameObjects and manipulate them dynamically through scripts. The audio system also supports 3D audio effects, allowing sounds to be positioned in 3D space for realistic spatial audio effects. Overall, Unity’s audio system provides a comprehensive solution for integrating audio into games with ease.
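A minimal sketch of an AudioSource used from script (class, method, and clip names are illustrative):

```csharp
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class FootstepPlayer : MonoBehaviour
{
    public AudioClip footstep; // assigned in the Inspector

    private AudioSource source;

    void Start()
    {
        source = GetComponent<AudioSource>();
        source.volume = 0.8f;
        // spatialBlend: 0 = 2D sound, 1 = fully 3D (positional) sound.
        source.spatialBlend = 1f;
    }

    public void PlayFootstep()
    {
        // PlayOneShot allows overlapping one-off effects on one source.
        source.PlayOneShot(footstep);
    }
}
```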

51
Q

Explain the concept of shader programming in Unity

A

Shader programming in Unity involves writing code to define how graphics are rendered on the GPU. Shaders control aspects like color, lighting, and texture mapping of objects in a scene. They consist of two main types: Vertex Shaders, which manipulate the position of vertices, and Fragment Shaders, which determine the color of pixels. Unity supports ShaderLab and HLSL/Cg languages for shader development, providing flexibility and control over the visual appearance of game graphics.

52
Q

Explain the use of Cinemachine

A

Cinemachine is a powerful camera system for Unity that streamlines the process of creating dynamic and cinematic camera shots. It provides features such as camera blending, procedural animation, target tracking, and shot composition tools. Cinemachine simplifies camera control and allows developers to create professional-looking camera movements and transitions with ease, enhancing the visual experience of games and cinematics.

53
Q

Explain the use of Unity’s Animation curve component

A

Unity’s Animation Curve component is used to define custom animation curves that control the interpolation of values over time. It allows developers to create smooth and precise animations by specifying keyframes with different values and tangents. Animation curves can be applied to properties like position, rotation, scale, and custom variables, providing fine-grained control over animation timing and behavior. They are commonly used in Unity’s Animation system and scripting to create dynamic and expressive animations for game objects and UI elements.
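A minimal sketch of sampling an AnimationCurve from script (the curve values and class name are illustrative):

```csharp
using UnityEngine;

public class CurveScaler : MonoBehaviour
{
    // Editable as a curve in the Inspector; this default eases
    // from 1 to 2 over one second.
    public AnimationCurve scaleCurve = AnimationCurve.EaseInOut(0f, 1f, 1f, 2f);

    void Update()
    {
        // Evaluate(t) samples the curve at time t; PingPong loops
        // t back and forth between 0 and 1.
        float s = scaleCurve.Evaluate(Mathf.PingPong(Time.time, 1f));
        transform.localScale = Vector3.one * s;
    }
}
```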

54
Q

Explain the concept of Animation in Unity

A

Animation in Unity refers to the process of creating motion and changes over time for GameObjects. It involves defining keyframes to specify the position, rotation, scale, and other properties of an object at different points in time. Unity provides tools for creating animations through the Animation window, allowing developers to create complex animations through keyframe manipulation. Animations can be applied to both 3D models and 2D sprites, enhancing the visual experience of games by bringing characters, objects, and UI elements to life.

55
Q

What is Unity

A

Unity is a versatile game engine that provides tools and resources for creating interactive 2D and 3D experiences across multiple platforms, such as mobile devices, computers, and consoles.

56
Q

Why did you choose Unity over other engines?

A

I chose Unity because of its user-friendly interface, robust features, and wide community support, making it an ideal platform for game development, especially for beginners.

57
Q

Please define pixel shader in Unity

A

A pixel shader in Unity is a program that determines the color and other attributes of individual pixels on a screen. It’s used to create various visual effects, like lighting, shadows, and textures, in games and other graphical applications.

58
Q

Explain the vertex shader in Unity.

A

A vertex shader in Unity is a program that manipulates the properties of vertices in a 3D model. It’s responsible for tasks like transforming vertices from object space to screen space, applying animations, and modifying vertex attributes for rendering.

59
Q

What is the function of AssetBundle in Unity 3D?

A

AssetBundle in Unity 3D is used to package and load assets dynamically at runtime, allowing for smaller initial download sizes, efficient updates, and better management of game resources.

60
Q

Explain some characteristics of Unity 3D

A

Unity 3D offers:

Cross-platform development: Build games for various platforms like mobile, desktop, and consoles.
User-friendly interface: Intuitive tools and workflows make game development accessible.
Extensive asset store: Access a vast library of assets, scripts, and plugins to enhance game development.
Powerful rendering engine: Support for high-quality graphics, shaders, and effects.
Robust scripting support: Use C# for scripting gameplay logic and interactions (the legacy UnityScript language has been deprecated).

61
Q

Explain the function of Inspector in the Unity engine.

A

The Inspector in Unity engine displays and allows modification of the properties of selected game objects or assets, providing a convenient interface for developers to tweak settings, attach scripts, and configure components during game development.

62
Q

How do you use the fixed time step in the engine?

A

You can use the fixed time step in Unity by setting the “Fixed Timestep” value in the Time settings (Edit > Project Settings > Time). This determines the interval at which physics calculations and FixedUpdate functions are executed, ensuring consistent physics behavior across different devices and frame rates.
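A minimal sketch of the fixed timestep in practice (the force value is illustrative; setting Time.fixedDeltaTime in code is equivalent to editing the Time settings):

```csharp
using UnityEngine;

public class PhysicsMover : MonoBehaviour
{
    private Rigidbody rb;

    void Start()
    {
        rb = GetComponent<Rigidbody>();

        // 0.01 means the physics simulation steps 100 times per
        // simulated second, regardless of rendering frame rate.
        Time.fixedDeltaTime = 0.01f;
    }

    void FixedUpdate()
    {
        // Runs once per fixed step, not once per rendered frame.
        rb.AddForce(Vector3.forward * 10f);
    }
}
```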

63
Q

Explain what is deferred lighting in game development.

A

Deferred lighting is a rendering technique used in game development where lighting calculations are deferred until after the geometry has been rendered. This approach separates the rendering of geometry from the calculation of lighting, allowing for more complex lighting effects and reducing the computational overhead associated with lighting calculations.

64
Q

What are some prominent advantages of the Unity engine?

A

Prominent advantages of the Unity engine include:

Cross-platform development
User-friendly interface
Extensive asset store
Powerful rendering engine
Robust scripting support

65
Q

Does the Unity engine have any significant drawbacks? What are they?

A

Some features may require additional purchases or plugins.
Limited control over low-level graphics rendering compared to other engines.

66
Q

What is SCRUM method?

A

Scrum is an agile project management framework used in software development. It emphasizes iterative and incremental development, allowing teams to adapt to changing requirements and deliver value to customers more effectively. Key components of Scrum include sprint planning, daily stand-up meetings, sprint reviews, and retrospectives.

67
Q

What is the distance between a point and a line?

A

The length of the line segment between the point and its projection on the line

68
Q

What is deltaTime

A

deltaTime is a Unity-specific value that represents the time interval, in seconds, between the last frame and the current one. Scaling per-frame changes by deltaTime makes them frame-rate independent.

69
Q

Order of Execution for Event Functions

A

First Scene Load
Editor
Before the first frame update
In between frames
Update order
Animation update loop
Rendering
Coroutines
When the object is destroyed
When quitting

https://docs.unity.cn/530/Documentation/Manual/ExecutionOrder.html

70
Q

What is the difference between static and dynamic batching in Unity?

A

Static batching combines static objects into a single draw call at build time, while dynamic batching combines small, moving objects at runtime. Static is for unmoving objects, dynamic for small moving ones. Example: Static batching for trees in a scene; dynamic batching for bullets fired from a gun. Both reduce draw calls, but static batching is done before runtime, while dynamic happens during gameplay.

71
Q

What is the use of occlusion culling in Unity?

A

Occlusion culling in Unity improves performance by hiding objects that aren’t visible to the camera. It’s like closing doors to rooms you’re not in to save electricity. For example, in a game level with a building, occlusion culling hides objects inside the building when the player is outside, speeding up rendering.

72
Q

Difference between “static class” and “singleton class”

A

A “static class” is a class that can’t be instantiated and is used for organizing related methods or variables. It’s like a toolbox with functions you can use without creating an instance of the class.

A “singleton class” is a class that allows only one instance to be created and provides a global point of access to that instance. It’s like a president of a club, where there’s only one president for the entire club.

Example:
A static class for math functions might have methods like Add(), Subtract(), etc., which you can call directly without creating an instance of the class.

A singleton class like GameManager ensures there’s only one instance of the GameManager in the game, allowing other parts of the game to easily access its functionality without creating multiple instances.
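A minimal sketch of the GameManager singleton pattern described above (one common Unity idiom; SomeMethod is a hypothetical example):

```csharp
using UnityEngine;

public class GameManager : MonoBehaviour
{
    public static GameManager Instance { get; private set; }

    void Awake()
    {
        if (Instance != null && Instance != this)
        {
            Destroy(gameObject); // enforce a single instance
            return;
        }
        Instance = this;
        DontDestroyOnLoad(gameObject); // survive scene loads
    }
}

// Accessed from anywhere: GameManager.Instance.SomeMethod();
```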

73
Q

What is the use of “yield” keyword

A

The “yield” keyword in Unity is used to pause the execution of a coroutine and then continue it from where it left off in the next frame or after a specified time delay. It’s like taking a break in a task and then returning to it later.

Example:
In a game, you might use “yield return new WaitForSeconds(2);” to pause the execution of a coroutine for 2 seconds before continuing. This is useful for creating delays or timed actions in gameplay, such as waiting for an animation to finish or implementing timed events.

74
Q

What is coroutine

A

A coroutine in Unity is a function that can suspend its execution to be continued later. It’s often used for tasks that need to happen over time, like animations, timers, or asynchronous operations.

Example:
In a game, you might use a coroutine to gradually move an object from one position to another, updating its position each frame. This allows for smooth animations or transitions without blocking the main thread.
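A minimal sketch of the gradual-movement coroutine described above (class name, target offset, and duration are illustrative):

```csharp
using System.Collections;
using UnityEngine;

public class SmoothMover : MonoBehaviour
{
    void Start()
    {
        StartCoroutine(MoveTo(transform.position + Vector3.right * 5f, 2f));
    }

    IEnumerator MoveTo(Vector3 target, float duration)
    {
        Vector3 start = transform.position;
        float elapsed = 0f;

        while (elapsed < duration)
        {
            transform.position = Vector3.Lerp(start, target, elapsed / duration);
            elapsed += Time.deltaTime;
            yield return null; // pause here, resume next frame
        }
        transform.position = target; // snap exactly to the target
    }
}
```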

75
Q

Describe invoke methods

A

Invoke methods in Unity are used to call a method after a specified delay or at regular intervals without needing to continuously check conditions in the Update loop.

Example:
Let’s say you want to spawn an enemy after a delay of 3 seconds. You can use Invoke to call a method that spawns the enemy after the specified delay. This allows you to handle delayed actions without needing to write custom logic for timing.

In Unity, there are primarily two types of invoke methods:

Invoke: This method is used to call a specified method after a delay. It’s commonly used for one-time delayed execution.

InvokeRepeating: This method is used to call a specified method repeatedly at a defined interval, starting after an initial delay. It’s commonly used for executing methods that need to be repeated at regular intervals, like updating the game state or checking for certain conditions periodically.
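A minimal sketch of both invoke methods (method names are illustrative; the bodies are left as placeholders):

```csharp
using UnityEngine;

public class Spawner : MonoBehaviour
{
    void Start()
    {
        // Call SpawnEnemy once, 3 seconds from now.
        Invoke(nameof(SpawnEnemy), 3f);

        // Call CheckGameState every 0.5 s after a 1 s initial delay.
        InvokeRepeating(nameof(CheckGameState), 1f, 0.5f);
    }

    void SpawnEnemy() { /* instantiate an enemy prefab here */ }

    void CheckGameState() { /* periodic checks here */ }

    void OnDisable()
    {
        // Pending invokes keep running until cancelled.
        CancelInvoke();
    }
}
```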

76
Q

Difference between Invoke and Coroutine

A

The main difference between Invoke and Coroutine in Unity is their usage and flexibility:

Invoke: Used for calling a method after a specified delay or at regular intervals. It’s simpler and easier to use for one-time delayed execution or repetitive actions with fixed intervals.

Example: Invoking a method to spawn an enemy after a delay of 3 seconds.

Coroutine: Used for more complex tasks that require asynchronous behavior, such as animations, timed events, or long-running operations. It provides more control over the execution flow, allowing pausing, resuming, or stopping the execution at specific points.

Example: Gradually moving an object from one position to another over time, with smooth transitions.

77
Q

What is Delegate and why we use it

A

A delegate in Unity is a type that represents references to methods with a particular signature. It’s used to pass methods as arguments or to define callback functions.

We use delegates in Unity to achieve loose coupling between components, allowing for more flexible and modular code.

Example:
Imagine you have a button in your game that performs different actions when clicked, depending on the situation. Instead of hardcoding these actions directly into the button, you can use delegates to define different methods for each action. Then, you assign the appropriate method to the button’s delegate based on the current situation. This allows the button to perform different actions dynamically without needing to know the details of each action
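A minimal sketch of the button scenario above using the built-in System.Action delegate type (class and method names are illustrative):

```csharp
using UnityEngine;

public class ContextButton : MonoBehaviour
{
    // System.Action is a built-in delegate type for parameterless methods.
    public System.Action onClick;

    public void Click()
    {
        onClick?.Invoke(); // call whatever method is currently assigned
    }
}

// Elsewhere, the action is swapped based on the situation:
//   button.onClick = OpenDoor;  // in the dungeon
//   button.onClick = BuyItem;   // in the shop
```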

78
Q

What is 2.5D in Unity?

A

In Unity, 2.5D refers to a game that combines 2D and 3D elements to create a pseudo-3D effect. It typically involves using 3D models or effects in a 2D environment or restricting movement to two dimensions while allowing objects to appear three-dimensional.

Example:
Think of a side-scrolling game where the characters and environment are depicted in 2D, but certain elements like obstacles or effects are rendered in 3D to add depth and visual interest. This creates a sense of depth while maintaining the simplicity and gameplay mechanics of a 2D game.

79
Q

What is the difference between Unity 3D and 2D ?

A

Unity 3D is used for creating games with three-dimensional graphics, where objects can move freely in three-dimensional space. Unity 2D, on the other hand, is tailored for creating games with two-dimensional graphics, where movement typically occurs on a flat plane with no depth.

Example:
Imagine a game like Minecraft, where you can move in all directions, including up and down. That’s Unity 3D. Now, think of a classic side-scrolling game like Super Mario Bros., where characters move left and right on a flat surface without the ability to move into or out of the screen. That’s Unity 2D.

80
Q

What is State and Blend Tree

A

In Unity, a state is a condition or mode that an object or character can be in, which affects its behavior or appearance. A blend tree is a tool used to smoothly transition between different states, often used for animations.

Example:
Think of a character in a game that can be in different states like idle, walking, running, or jumping. Each of these states represents a different behavior or animation. A blend tree allows smooth transitions between these states, so the character can smoothly transition from walking to running, for instance, without sudden jumps or pauses in the animation.

81
Q

What is Trigger in animation

A

In Unity, a trigger in animation is a special type of parameter that can be used to activate transitions between animation states. It’s typically used to control when an animation transition occurs based on specific events or conditions in the game.

Example:
Imagine a character in a game that can perform different actions like attacking, jumping, or crouching. Each of these actions has its own animation. A trigger can be used to activate the transition from the idle state to the attack animation when the player presses the attack button. So, pressing the attack button triggers the attack animation to play.
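A minimal sketch of firing that trigger from script (assumes the Animator Controller defines a trigger parameter named “Attack”):

```csharp
using UnityEngine;

public class AttackInput : MonoBehaviour
{
    private Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        if (Input.GetButtonDown("Fire1"))
        {
            // Fires once; the trigger resets automatically after the
            // transition it activates is taken.
            animator.SetTrigger("Attack");
        }
    }
}
```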

82
Q

Explain Inverse Kinematics (IK) animation in Unity

A

Inverse Kinematics (IK) animation in Unity allows for more natural and realistic character movement by calculating the position and orientation of the character’s limbs based on a target position.

Example:
Imagine you have a character holding a tool. With IK animation, you can move the tool to a specific position in the game world, and the character’s arm will automatically adjust to reach and hold the tool realistically. This makes it easier to create natural interactions between characters and objects in the game.

83
Q

Explain World Space, Local Space, and Screen Space

A

World Space: Coordinates are relative to the world’s origin. Movement or rotation affects the object’s position or orientation in the global world.

Local Space: Coordinates are relative to the object’s own origin. Movement or rotation affects the object’s position or orientation based on its local axes.

Screen Space: Coordinates are relative to the screen’s dimensions. Often used for UI elements. The origin is at the bottom-left corner, with coordinates increasing as you move up and to the right.

Example:

World Space: Think of a GPS system where locations are referenced globally, regardless of where you are.

Local Space: Imagine a car turning its wheels. The rotation of the wheels is relative to the car’s own frame of reference.

Screen Space: Like placing stickers on a window from inside a car. The positions of the stickers are relative to the window’s dimensions, regardless of the car’s movement.

84
Q

Steps to optimize Unity Games

A

Profile Performance: Identify performance bottlenecks using Unity Profiler to understand where optimizations are needed.

Optimize Assets: Reduce asset sizes by compressing textures, optimizing models, and minimizing audio files.

Use Level of Detail (LOD): Implement LOD systems to reduce polygon count and texture resolution for distant objects.

Batching: Use static and dynamic batching to reduce draw calls and improve rendering performance.

Occlusion Culling: Hide objects that are not visible to the camera to reduce rendering overhead.

Texture Atlasing: Combine multiple textures into atlases (Sprite Atlas) to minimize draw calls and memory usage.

Code Optimization: Optimize scripts by reducing unnecessary calculations, avoiding excessive updates, and using efficient algorithms.

UI Optimization: Optimize UI elements by reducing the number of UI elements, using UI batching, and avoiding expensive UI effects.

Platform-specific Optimization: Adjust graphics settings and quality levels for different target platforms to optimize performance.

Regular Testing: Continuously test performance on target devices and iterate on optimizations to ensure smooth gameplay across different platforms.

85
Q

Can we load two scenes at once in hierarchy window?

A

A

Yes, you can load two scenes at once in the Hierarchy window in Unity by using the Multi-Scene Editing feature. To do this, open both scenes in the Editor, and they will appear simultaneously in the Hierarchy window.

B

Yes, Unity allows you to load multiple scenes simultaneously using additive scene loading. This is done with SceneManager.LoadScene("SceneName", LoadSceneMode.Additive);

86
Q

If a 3D object is moving, how can we add a Slow motion effect

A

To add a slow-motion effect in Unity, reduce the Time.timeScale value (e.g., Time.timeScale = 0.5f).

Note: Time.timeScale is also used to pause/resume the game (Time.timeScale = 0 pauses, Time.timeScale = 1 resumes normal speed).

87
Q

What is FPS Rate?

A

FPS (Frames Per Second) rate measures how many images (frames) are displayed per second in a video or game. Higher FPS results in smoother motion. For example, a movie typically runs at 24 FPS, while games aim for 60 FPS for smooth gameplay.

88
Q

If an animation is still playing after setting Time.timeScale = 0, what can we do to stop the animation while paused?

A

Set the animator’s speed property to 0 when pausing and restore it when unpausing. For example:

animator.speed = 0; // Pause
animator.speed = 1; // Resume

89
Q

How to change the texture of a cube object at runtime in Unity?

A

To change the texture of a cube object at runtime in Unity, you would typically do the following:

Access the material of the cube.
Assign a new texture to the material.
Here’s a concise example:

// Assuming you have a reference to the cube’s Renderer component
Renderer cubeRenderer = cubeGameObject.GetComponent<Renderer>();

// Assuming you have a reference to the new texture
Texture newTexture = yourNewTexture;

// Assign the new texture to the cube’s material
cubeRenderer.material.mainTexture = newTexture;

89
Q

What does MonoBehaviour do in Unity?

A

MonoBehaviour is the base class from which Unity scripts that attach to GameObjects derive. It allows objects to hook into Unity’s event system, like detecting collisions or updating every frame. Think of it as a template that lets GameObjects have behaviors and respond to game events.

90
Q

What is a kinematic Rigidbody in Unity?

A

In Unity, a kinematic Rigidbody is a type of physics body that doesn’t respond to forces like gravity or collisions automatically. Instead, you control its movement through scripting.

For example, imagine a platform in a game that moves back and forth. You might use a kinematic Rigidbody to control its motion precisely, without interference from external forces like gravity.

90
Q

What is the use of Quaternion in Unity?

A

In Unity, a Quaternion represents rotations in 3D space. It’s used to store and manipulate rotations of GameObjects. Think of it as a compact way to describe an object’s orientation, like which way it’s facing.

For example, imagine you’re driving a car in a game. The car’s rotation (how it’s turned) is stored and manipulated using quaternions. They handle rotations smoothly and efficiently.
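A minimal sketch of typical Quaternion usage: turning smoothly to face a target (field names and turn speed are illustrative):

```csharp
using UnityEngine;

public class FaceTarget : MonoBehaviour
{
    public Transform target;
    public float turnSpeed = 5f;

    void Update()
    {
        // Build a rotation that looks toward the target...
        Quaternion desired = Quaternion.LookRotation(target.position - transform.position);

        // ...and spherically interpolate toward it, which avoids the
        // gimbal-lock problems of manipulating raw Euler angles.
        transform.rotation = Quaternion.Slerp(transform.rotation, desired, turnSpeed * Time.deltaTime);
    }
}
```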

90
Q

What is Mesh in Unity?

A

In Unity, a mesh is a collection of vertices, edges, and faces that define the shape of a 3D object. It’s like the skin of an object, determining its visual appearance.

For example, imagine a cube in a game. The cube’s mesh defines its six faces, which give it its box-like appearance. Meshes are crucial for rendering objects in Unity’s 3D environment.

91
Q

What is ECS?

A

ECS stands for Entity-Component-System. It’s a design pattern used in game development, including Unity, to manage entities (objects), components (features), and systems (logic).

Imagine a game character: the entity represents the character, components represent its features like health, speed, or appearance, and systems are the logic that manages how these components interact and behave.

ECS helps organize and optimize game code by separating data and behavior, making it easier to manage complex systems efficiently.

91
Q

What is LOD and what is it used for in Unity?

A

LOD stands for Level of Detail. In Unity, LOD is used to optimize performance by displaying simpler versions of objects when they are far away from the camera.

Think of LOD like a telescope adjusting its magnification based on the distance to the object you’re looking at. In a game, when an object is far away, you might swap it with a lower-detail version to reduce the number of polygons rendered, improving performance without sacrificing visual quality. This helps maintain a smooth gameplay experience, especially in large and complex scenes.

92
Q

What is New Input System?

A

The New Input System in Unity is a modern and flexible system for handling player input. It provides improved performance, extensibility, and ease of use compared to the old input system.

Think of it like a universal remote control for games. It allows developers to easily define and manage different types of input devices, like keyboards, controllers, or touch screens, and map their inputs to game actions, such as moving a character or firing a weapon. This makes it easier to create responsive and customizable controls for players.

93
Q

Difference between Materials and Physics Materials?

A

Materials in Unity are used to define the visual properties of objects, such as color, texture, and shininess. They determine how an object looks when rendered in the game.

Physics Materials, on the other hand, are used to define physical properties of objects, such as friction and bounciness. They affect how objects interact with each other in the game world.

A common man example would be comparing a car’s paint (material) to its tires (physics material). The paint determines how the car looks, while the tires’ material affects how they grip the road and interact with other surfaces.

94
Q

What is Texture?

A

A texture in Unity is an image that is applied to the surface of a 3D object to give it color, pattern, or detail. It’s like wrapping a gift with decorative paper to make it look more interesting.

For example, imagine you have a plain cube in a game. Applying a texture to it could make it look like a wooden crate, a metal box, or a colorful patterned cube. Textures add visual richness and detail to game objects.

95
Q

What is Render Texture?

A

A Render Texture in Unity is a special type of texture that can be drawn to by the camera or other rendering processes. It captures the output of a camera and uses it as a texture on other objects.

Think of it like a live TV broadcast on a screen in a game. The camera captures the scene, and that live image is displayed on a TV screen within the game world. This is useful for effects like real-time reflections, video feeds, or portals.

96
Q

What is culling mask?

A

A culling mask in Unity is used to control which layers a camera can see and render. It allows you to selectively show or hide objects in the camera view based on their assigned layers.

Think of it like wearing special glasses that let you see only certain things. For example, you can set a culling mask so the camera only shows characters and hides the background, or vice versa. This helps optimize performance and manage what the player sees.

97
Q

What is the difference between projection perspective and orthographic?

A

In Unity, the difference between projection perspective and orthographic is how they render 3D objects:

Perspective Projection: Objects appear smaller as they get further from the camera, mimicking how human eyes perceive depth. Think of it like looking through a camera lens, where distant objects shrink.

Orthographic Projection: Objects maintain their size regardless of distance from the camera, with no sense of depth. It’s like looking at a blueprint or architectural drawing, where all dimensions are equal and there is no perspective distortion.

These projections are used based on the visual needs of the game or application.

98
Q

What is the use of post processing?

A

Post-processing in Unity refers to applying visual effects to the camera’s output after the scene is rendered. It enhances the final image with effects like bloom, color grading, and motion blur.

Think of it like editing a photo after taking it, adding filters to make it look more vibrant, cinematic, or stylized. This improves the visual quality and atmosphere of the game.

98
Q

What is Profiler?

A

The Profiler in Unity is a tool used to analyze and optimize the performance of your game. It provides detailed information about CPU usage, memory consumption, rendering times, and more.

Using the Profiler:

Open it via Window > Analysis > Profiler.
Play your game and observe the data it collects in real-time.
Think of it like a diagnostic tool for your car, showing you where performance issues are happening so you can make necessary adjustments for a smoother ride.

99
Q

What is the NavMesh system? Explain its components.

A

The NavMesh system in Unity is used for AI pathfinding, allowing characters to navigate the game world.

Components of the NavMesh system:

NavMesh: Defines walkable areas.
NavMesh Agent: Attached to characters for navigation.
NavMesh Obstacle: Areas to avoid or navigate around.
Off-Mesh Link: Allows agents to navigate across gaps or jumps not covered by the NavMesh.
Think of Off-Mesh Links like bridges or tunnels in a GPS system, enabling navigation across areas that aren’t directly connected by roads.
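A minimal sketch of driving a NavMesh Agent from script (requires a baked NavMesh in the scene; the class and field names are illustrative):

```csharp
using UnityEngine;
using UnityEngine.AI;

public class ChaseTarget : MonoBehaviour
{
    public Transform target;
    private NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    void Update()
    {
        // The agent plans a path over the NavMesh and steers around
        // NavMesh Obstacles automatically.
        agent.SetDestination(target.position);
    }
}
```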

100
Q

What is SerializeField?

A

In Unity, [SerializeField] is an attribute used to make private fields visible in the Inspector window. It allows you to access and modify private variables from the Unity Editor without making them public.

Think of it like opening a secret drawer with a key. You can hide your valuables (private variables) away, but still access them easily when you need to make adjustments. This makes it convenient for tweaking values and configurations without exposing everything to the public eye.
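A minimal sketch of the attribute in use (field names and values are illustrative):

```csharp
using UnityEngine;

public class Player : MonoBehaviour
{
    // Visible and editable in the Inspector, but still private
    // to other scripts.
    [SerializeField] private float moveSpeed = 5f;

    // Public fields are serialized and shown by default.
    public int maxHealth = 100;
}
```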

100
Q

What are Joints, how are they used, and how many types are there?

A

In Unity, joints are used to connect two rigidbodies, allowing for complex physical interactions like hinges, springs, or sliding mechanisms.

Types of Joints in Unity:

Hinge Joint: Allows rotation around one axis, like a door hinge.
Spring Joint: Connects objects with a spring force, like a car suspension.
Fixed Joint: Rigidly connects two objects, making them act as one.
Character Joint: Allows for more complex movements like those of a human limb.
Configurable Joint: Highly flexible, allows for customizing constraints and movements.
Distance Joint: Maintains a fixed distance between two objects.
Usage Example: To create a swinging door, you would use a Hinge Joint. Attach it to the door GameObject and connect it to the door frame. This allows the door to swing open and closed realistically.

Think of joints like different types of mechanical connections or hinges in the real world, enabling various types of movement and constraints between objects.

101
Q

What is JSON and what is it used for?

A

JSON (JavaScript Object Notation) is a lightweight data interchange format that is easy for humans to read and write, and easy for machines to parse and generate. It’s commonly used for transferring data between a server and a web application, or between different parts of a program.

Think of JSON like a language for sharing information. Imagine you’re sending a message to a friend, and you want to include details about what you’re bringing to a picnic. You might write:

{
  "items": ["sandwiches", "chips", "soda"],
  "location": "park",
  "time": "12:00 PM"
}
In this example, the JSON format organizes the information (items, location, time) in a way that’s easy for both humans and computers to understand. In Unity, JSON is often used for saving and loading game data, or for communicating with web APIs.
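A minimal sketch of round-tripping that picnic data with Unity’s built-in JsonUtility (the PicnicPlan class name is illustrative):

```csharp
using UnityEngine;

// Fields must be public (or [SerializeField]) and the class marked
// [System.Serializable] for JsonUtility to pick them up.
[System.Serializable]
public class PicnicPlan
{
    public string[] items;
    public string location;
    public string time;
}

public class JsonDemo : MonoBehaviour
{
    void Start()
    {
        PicnicPlan plan = new PicnicPlan
        {
            items = new[] { "sandwiches", "chips", "soda" },
            location = "park",
            time = "12:00 PM"
        };

        string json = JsonUtility.ToJson(plan);                      // serialize
        PicnicPlan loaded = JsonUtility.FromJson<PicnicPlan>(json);  // deserialize
        Debug.Log(loaded.location); // "park"
    }
}
```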

102
Q

What is Serializable?

A

In Unity, “Serializable” refers to the ability of a class to be converted into a format that can be stored or transmitted, like saving data to a file or sending it over a network.

Imagine you’re mailing a letter. To send it, you need to write your message on paper and put it into an envelope. The paper is like the serializable class - it can be written down and transported. Similarly, in Unity, a serializable class can be saved to a file or sent over a network.

102
Q

What’s the gotcha with accessing materials in a script?

A

The key gotcha with accessing materials in scripts is that changing a material property directly affects all objects using that material. This means changes can affect other objects unintentionally.

For example, if you change the color of a material used by multiple objects, all those objects will change color simultaneously. This can lead to unexpected behavior or unintended visual changes in your game.

To avoid this, it’s often better to create a new material instance for each object, allowing you to modify properties independently without affecting others.
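A minimal sketch of the two access paths (the tint color is illustrative):

```csharp
using UnityEngine;

public class TintExample : MonoBehaviour
{
    void Start()
    {
        Renderer rend = GetComponent<Renderer>();

        // sharedMaterial edits the material asset itself, so every
        // object using it changes color:
        // rend.sharedMaterial.color = Color.red;

        // .material silently creates a per-object material instance,
        // so only this object changes (at the cost of breaking
        // batching and requiring manual cleanup of the instance):
        rend.material.color = Color.red;
    }
}
```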

103
Q

How would you solve the nested prefab problem? (multiple solutions)

A

One solution to the nested prefab problem in Unity is to use ScriptableObjects or data-driven design:

ScriptableObjects: Instead of nesting prefabs, create ScriptableObjects to represent different configurations or variations of objects. These ScriptableObjects can hold references to prefabs and other data needed for instantiation. Then, your code can instantiate the appropriate prefabs based on the ScriptableObjects.

Data-driven design: Store object configurations in data files, such as JSON or XML. These files can define the relationships between objects and their variations without directly nesting prefabs. Your code can then read these data files and instantiate objects accordingly.

Both approaches decouple object configurations from prefabs, making it easier to manage and modify object variations without relying on nested prefabs.

104
Q

How is your understanding of algorithms and big O? Data structures? Can you implement basic data structures and sorting algorithms? Do you know common design patterns and how to implement them?

A

As a Unity Game Engineer, understanding algorithms and Big O notation is crucial for optimizing game performance. Data structures like arrays, lists, dictionaries, and queues are fundamental for managing game data efficiently. Implementing basic data structures and sorting algorithms such as bubble sort, insertion sort, or quicksort can improve game performance when handling large datasets.

Knowledge of common design patterns like singleton, observer, and factory patterns is valuable for creating scalable and maintainable game code. Implementing these patterns helps organize code logic effectively, making it easier to extend and debug game features.
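As one concrete instance of the patterns named above, here is a common Unity-style singleton sketch (`GameManager` is a hypothetical example class):

```csharp
using UnityEngine;

// Singleton pattern in MonoBehaviour form: exactly one instance
// survives, accessible globally via GameManager.Instance.
public class GameManager : MonoBehaviour
{
    public static GameManager Instance { get; private set; }

    void Awake()
    {
        if (Instance != null && Instance != this)
        {
            Destroy(gameObject);       // enforce a single instance
            return;
        }
        Instance = this;
        DontDestroyOnLoad(gameObject); // survive scene loads
    }
}
```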

105
Q

Difference between Update, Fixed Update, and Late update.

A

Update:

Function: Use for regular updates that need to run once per frame, such as player input handling, AI movement, or basic calculations.
Example: Updating the position of a character based on player input.
FixedUpdate:

Function: Use for physics calculations and updates that are tied to the fixed timestep (e.g., rigidbody physics).
Example: Applying forces like gravity or movement calculations using Time.fixedDeltaTime to ensure consistent physics behavior across different frame rates.
LateUpdate:

Function: Use for actions that should occur after all Update functions have been called. Useful for camera movement and following other objects.
Example: Adjusting the camera’s position to follow a player character after the character has been moved in Update.
Example Functions:

Update: void Update() { }

Player input processing
Non-physics movement updates
FixedUpdate: void FixedUpdate() { }

Physics-related updates (e.g., applying forces, movement calculations)
Using Time.fixedDeltaTime for consistent physics behavior
LateUpdate: void LateUpdate() { }

Camera movement adjustments (e.g., following a target)
Executing after all Update functions
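The three callbacks can be sketched together in one hypothetical controller: input is read in Update, forces are applied in FixedUpdate, and the camera follows in LateUpdate:

```csharp
using UnityEngine;

// Sketch only: PlayerController, body, and cameraRig are example
// names, assumed to be assigned in the Inspector.
public class PlayerController : MonoBehaviour
{
    public Rigidbody body;
    public Transform cameraRig;
    float input;

    void Update()
    {
        // Runs once per rendered frame: read input here.
        input = Input.GetAxis("Horizontal");
    }

    void FixedUpdate()
    {
        // Runs on the fixed physics timestep: apply forces here.
        body.AddForce(Vector3.right * input * 10f);
    }

    void LateUpdate()
    {
        // Runs after all Update calls: follow the player here.
        cameraRig.position = transform.position + Vector3.back * 5f;
    }
}
```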

106
Q

Difference between OnTrigger and OnCollision

A

OnTrigger:

Use: Trigger events when GameObjects with colliders pass through each other without necessarily physically colliding (e.g., when one collider enters another’s trigger collider).
Example: Activating a power-up when a player enters its trigger collider.
OnCollision:

Use: Handle collision events between GameObjects with colliders that physically interact and collide with each other.
Example: Bouncing a ball off a wall when it physically collides with the wall’s collider.
Example Scenario:

OnTrigger: Use when you want to detect when one object enters another without causing a physical collision, such as collecting items or triggering events when passing through specific areas.

OnCollision: Use when you want to handle interactions that result from physical collisions between objects, like bouncing off surfaces or triggering destruction effects upon impact.

In summary, OnTrigger is for trigger interactions (passing through), while OnCollision is for physical collisions (bumping into).
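Both callback families can be sketched in one script (`ContactExample` is a hypothetical name; OnTriggerEnter fires for colliders marked “Is Trigger”, OnCollisionEnter for physical contacts involving at least one Rigidbody):

```csharp
using UnityEngine;

public class ContactExample : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        // No physical response; e.g. collect a pickup or open a door.
        Debug.Log("Entered trigger of " + other.name);
    }

    void OnCollisionEnter(Collision collision)
    {
        // Physical contact; Collision carries contact points, impulse, etc.
        Debug.Log("Collided with " + collision.gameObject.name);
    }
}
```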

107
Q

Difference between Awake and Start

A

Awake:

Use: Called once when the script instance is loaded, always before any Start method. Use it for initializing variables or setting up references.
Example: Setting up references to other components or initializing values needed before the game starts.
Start:

Use: Called after Awake, but before the first Update method. Use it for actions that need to be performed once at the beginning of a script’s lifetime.
Example: Initializing gameplay mechanics like spawning enemies or setting up initial game state.
Example Usage:

Awake: Use to initialize references to other components or set up variables that will be used throughout the script’s lifetime.

Start: Use to perform one-time setup tasks or start actions that should occur once when the script or game object is enabled.

In summary, Awake is for initialization and setting up references, while Start is for beginning-of-game tasks and one-time setups.
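The typical split looks like this sketch (`ExampleBehaviour` is a hypothetical name): Awake caches references, which is safe before other scripts have run their Start; Start then uses them for one-time setup.

```csharp
using UnityEngine;

public class ExampleBehaviour : MonoBehaviour
{
    Rigidbody body;

    void Awake()
    {
        // Called once when the script instance is loaded,
        // even if the component is disabled.
        body = GetComponent<Rigidbody>();
    }

    void Start()
    {
        // Called once, just before the first Update,
        // and only once the component is enabled.
        body.AddForce(Vector3.up * 5f);
    }
}
```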

108
Q

Difference between Coroutine and Invoke, and Invoke Repeating

A

Coroutine:

Use: Use for executing a sequence of instructions over multiple frames with explicit control over timing using yield statements (e.g., WaitForSeconds).
Example: A countdown timer that updates every second and performs an action when the countdown ends.

IEnumerator CountdownCoroutine(float countdownTime)
{
    while (countdownTime > 0)
    {
        yield return new WaitForSeconds(1f);
        countdownTime--;
        Debug.Log("Time left: " + countdownTime);
    }
    Debug.Log("Countdown finished!");
    // Perform action after countdown ends
}

Invoke:

Use: Use for calling a method after a specified delay, executing it once.
Example: Triggering a method to play a sound after a delay.

void Start()
{
    Invoke("PlaySound", 2f); // Invoke PlaySound after 2 seconds
}

void PlaySound()
{
    // Play sound logic here
}

InvokeRepeating:

Use: Use for repeatedly calling a method at a fixed interval, starting after an initial delay.
Example: Repeating an enemy’s movement update every 0.5 seconds.

void Start()
{
    InvokeRepeating("MoveEnemy", 1f, 0.5f); // Invoke MoveEnemy every 0.5 seconds, starting after 1 second
}

void MoveEnemy()
{
    // Enemy movement logic here
}

Summary:

Coroutine: For executing tasks over time with explicit control over timing and execution flow.

Invoke: For calling a method once after a specified delay.

InvokeRepeating: For repeatedly calling a method at a fixed interval, starting after an initial delay.

Choose Coroutine when you need more complex timing or control over execution flow across frames. Use Invoke for simple delayed method calls and InvokeRepeating for repeated method calls at regular intervals.

109
Q

In what order do fundamental constructs of C# script appear?

A

using directives, namespace, class declaration, variables (fields), functions (methods)

110
Q

What are the fundamental constructs used to organize and structure code?

A

Namespaces: Namespaces are used to organize code into logical groups and to avoid naming conflicts. They help in organizing classes, interfaces, structs, enums, and delegates into a hierarchical structure.

Class Declaration: A class is a blueprint for creating objects (instances) which define its properties (variables) and behaviors (methods). The class declaration defines the structure and behavior of objects of that class.

Variables: Variables are containers used to store data values. They have a type and a name and can hold different types of data, such as numbers, text, or objects.

Functions (or Methods): Functions (or methods) are blocks of code that perform a specific task. They are defined within classes and are used to define the behavior of objects of that class.
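A minimal skeleton showing these constructs in their usual order (the names `MyGame` and `HealthComponent` are hypothetical examples):

```csharp
// using directives come first, then the namespace, then the class,
// then its variables (fields), then its functions (methods).
using UnityEngine;

namespace MyGame                                   // namespace
{
    public class HealthComponent : MonoBehaviour   // class declaration
    {
        public int maxHealth = 100;                // variables (fields)
        int current;

        void Awake()                               // functions (methods)
        {
            current = maxHealth;
        }

        public void TakeDamage(int amount)
        {
            current = Mathf.Max(0, current - amount);
        }
    }
}
```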