ARTtech seminar: Timo Heinäpurola: Mobile Game Engine Development at Reforged Studios

Assembly Summer 2016 seminar presentation.
In this talk I present how we develop our custom mobile game engine at Reforged Studios. I will outline the reasons why we decided to start developing our own game engine instead of licensing one. The rest of the talk will focus on the principles governing our development process and software architecture. This includes methods such as data-oriented design, the KISS principle, platform-independent build definition, and automated testing.
Title: Mobile Game Engine Development at Reforged Studios
Author: Timo Heinäpurola
Transcript
00:00 Hi everyone. Our next speaker is Timo Heinäpurola from Reforged Studios, where he works as a senior engineer, and he's going to talk about mobile game engine development at the studio. Welcome.
00:21 So hello everyone, and welcome to this talk about mobile game engine development at Reforged Studios. I'm going to start off with a short introduction of the company and myself, followed by why we actually started developing a custom engine in the first place, which is a question I hear quite a lot. The majority of the presentation will be about our in-house engine development principles and processes.
00:51 To start off, the company was founded in 2015, so a bit over a year ago, in Helsinki, Finland, by four industry veterans: people who've been developing games such as League of Legends, World of Warcraft, and Need for Speed, a long list of really high-profile titles actually. As a company we're focused on mobile games and on creating unique worlds.
01:12 My name is Timo Heinäpurola. I'm basically a game engine enthusiast, you could say. I started programming when I was about 11 and started working on engines when I was 14, so I've been doing this for pretty much as long as I can remember, and I'm still having tons of fun.
01:28 I actually started in the IT industry, so my industry experience originally comes from there. I did that for five years, but during those five years I had this burning desire to be in the game development industry, so I started doing some indie development in addition to hobby development as well. At least one game came out of that; it didn't really do that well, but I did develop it with my own 3D game engine.
01:53 So I ended up moving to Bugbear Entertainment, where I initially started working on Ridge Racer Driftopia. I did some work there, but my main priority was actually working on Next Car Game: Wreckfest, as the lead programmer towards the end of the project. After that I moved to Next Games, where I started working as the lead programmer on The Walking Dead: No Man's Land and also did some work on Compass Point: West.
02:23 Now I'm at Reforged, and my responsibility is basically the in-house engine development process. This means that I maintain the vision of the engine, but I also do most of its development. We actually have a very flat hierarchy at the company, with people working in different disciplines or different areas of the game and the engine, but my responsibility is mainly to make sure that everyone has all the information they actually need to work on features there.
02:54 So why did we start developing a custom engine in the first place? We're basically building an army game, which means that we have a large number of units and effects in the scene at the same time. A solution that many companies take is to use billboards for all the characters to cut down on the rendering time, but we didn't want to do that; we wanted to create a more vivid and more complex environment with complex animations and effects. This led to very specific rendering requirements.
03:30 As a small thought game here, imagine that this bush represents us developing our game, and the task at hand is to penetrate this bush with the right kind of tool. Which tool should we choose if we have two options, the machete or the Swiss Army knife? Both basically achieve the same task, but there are differences in how well they actually do it. As you might have already guessed, the Swiss Army knife here represents engine products.
03:57 For us, we have very specific needs for the renderer, as already mentioned, so we would have needed to do quite a lot of modifications to a licensed engine. That meant we would have needed a full engineering team to really take control of the engine and to actually modify it, and we only needed a relatively small set of its features, because there are a lot of features in existing engines that are targeted towards different kinds of games, and we didn't need those. So these tools that we could have used were not really a perfect fit for us either.
04:36 So we started looking at custom engines. A custom engine could be simple and efficient, it could target the very specific needs that we had, and it would also be a lot easier to modify and fix, because we have the historical knowledge: we actually understand how the engine was developed and how it basically works. And we only needed the blade, so to speak; we needed a very specific tool for a very specific kind of game.
05:07 So now let's look at how we actually do this. First of all, I'm going to go through a couple of engine development pillars, pillars that are pretty much underneath anything that we do on the engine: strict resource management, lean and clean code, and multi-platform development. I'm going to go through each of these in a bit more detail now.
05:29 To start off, strict resource management. What this means for us is that every system is responsible for managing its own resources. On mobile platforms it's really important to remember that the amount of resources you can actually use is typically nowhere near how much the device theoretically has, so we need to be really careful with how much we use and how we use those resources.
06:02 There are a couple of types of resources; everyone basically knows these, but the first one I'm going to cover here is CPU time. On mobile, one thing we really need to take into account is that it's not all about the frame rate and the percentage of the processing power you actually use; you also need to look at how much energy your processor is using when it's computing stuff. So if you reach 60 frames per second with ninety percent processor utilization, that's probably not a good idea, because you're going to be eating through the battery pretty quickly.
06:49 The second type of resource is memory. For us, all memory allocations actually go through our custom memory manager. This allows us to keep really good track of how much memory we're actually using and where we're doing the allocations. The memory manager also provides us with a lot of utilities and tools that help us do this tracking and actually optimize our memory usage. We also encourage all systems to do preallocation as much as possible: when initializing a system, we try to pre-allocate as much of the memory we actually need for processing, and then of course free that memory when the system is deinitialized.
07:41 The memory manager actually supports pooling. Pooling is basically the idea of grouping memory allocations depending on how and where you're using them. We have two types of pools: static pooling and dynamic pooling. Dynamic pooling typically allocates from the heap, and you can create your own heaps, but the idea is that it is for dynamic memory usage where you don't necessarily know how much memory you're actually going to need. Static pooling, on the other hand, allocates one block of memory up front, from which we can then allocate the actual objects or data that we need, and these pools can form hierarchies. This way we can more easily group our memory allocations based on system and based on usage.
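A minimal sketch of the static-pooling idea described above (the class name, slot-based layout, and sizes are hypothetical illustrations, not the actual Reforged memory manager API):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Hypothetical static pool: one block reserved up front, fixed-size slots,
// and a free list for recycling. Not the actual engine code.
class StaticPool {
public:
    StaticPool(std::size_t slot_size, std::size_t slot_count)
        : block_(static_cast<std::uint8_t*>(std::malloc(slot_size * slot_count))) {
        // Everything is pre-allocated at system initialization time.
        free_slots_.reserve(slot_count);
        for (std::size_t i = 0; i < slot_count; ++i)
            free_slots_.push_back(block_ + i * slot_size);
    }
    ~StaticPool() { std::free(block_); }

    void* allocate() {
        assert(!free_slots_.empty() && "static pool exhausted");
        void* slot = free_slots_.back();
        free_slots_.pop_back();
        return slot;
    }

    void deallocate(void* slot) {
        free_slots_.push_back(static_cast<std::uint8_t*>(slot));
    }

private:
    std::uint8_t* block_;                    // the single up-front allocation
    std::vector<std::uint8_t*> free_slots_;  // slots currently available
};
```

A per-system pool along these lines keeps each system's memory budget explicit and makes it easy to report exactly how much memory a given system is using.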
08:33 Then we have allocators, which are basically the primary interface for allocating memory. First of all we have scratch allocators, which are really good for things like per-frame allocation, where we need to allocate some memory for the frame's operations but don't need to persist those allocations, so we can flush the buffers after we're done with the frame. Then we have object allocators, which are really good for recycling memory for objects, since we know the size of the allocations. On top of this low-level object allocator system we have macros and templates that enable constructor and destructor logic to be built on top of it.
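The engine's allocator interfaces aren't shown in the talk, but the scratch-allocator idea can be sketched roughly like this (hypothetical names): allocations bump an offset inside a pre-allocated buffer, and the whole buffer is flushed once the frame is done.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical scratch (per-frame) allocator: bump-allocate from a fixed
// buffer, then reset everything at once at the end of the frame.
class ScratchAllocator {
public:
    explicit ScratchAllocator(std::size_t capacity) : buffer_(capacity), offset_(0) {}

    void* allocate(std::size_t size, std::size_t alignment = alignof(std::max_align_t)) {
        // Round the current offset up to the requested (power-of-two) alignment.
        std::size_t aligned = (offset_ + alignment - 1) & ~(alignment - 1);
        assert(aligned + size <= buffer_.size() && "scratch buffer overflow");
        offset_ = aligned + size;
        return buffer_.data() + aligned;
    }

    // No per-allocation free: the frame ends and everything is discarded.
    void reset() { offset_ = 0; }

private:
    std::vector<std::uint8_t> buffer_;
    std::size_t offset_;
};

// Usage sketch: allocate freely during the frame, flush afterwards.
// ScratchAllocator frame_scratch(1 << 20);
// void* tmp = frame_scratch.allocate(256);
// ... build frame data in tmp ...
// frame_scratch.reset();  // end of frame
```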
09:24 Next up we have lean and clean code as the second pillar. What we aim to do in the engine is keep the code base as lean and clean as possible, and we have a number of items that help us reach this goal: Orthodox C++, small interchangeable elements, self-commenting code, upfront complexity, and finally data-oriented design. Data-oriented design is a very important topic and I'm going to spend a bit more time on it once we get there.
10:02 First off, Orthodox C++. What Orthodox C++ basically means is that we try to minimize the number of C++ features we actually use, so we focus only on the features that really provide value to us. We don't use C++ features just because they are there or just because they're cool; instead we use them only because we really need them. Things that we don't use: runtime type information, exceptions, standard library streams, and we only use metaprogramming in moderation, so very conservatively. All the engine code is written with C-style interfaces, which basically means two major things: we have opaque structures and we have certain conventions for function naming.
11:03 To explain these: on the right-hand side image here, at the very top, you can see the struct entity, which is just a forward declaration. There's an entity somewhere in there, but we're not saying anything about what it actually means or what kind of data it contains. Then we have a number of functions (it's not really readable, but anyway) that just take pointers to that object, and those are the operations of the system. Object creation also goes through functions, so this leads to explicit, very visible object life cycles: the system can track which objects have actually been created, when they're freed, whether they are freed when the system is deinitialized, and all of that.
12:02 And all implementations are contained in the .cpp files, so we don't put a lot of functionality in inline methods, and we don't write classes that have a lot of fields in the header and define all the internals there. This means that if we change the internals of a system, add new fields, or modify functionality, this doesn't propagate into the compilation time of the rest of the code, and that is the very clear reason why we do it. It also makes the code interface a lot simpler and a lot easier to read.
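The slide isn't legible in the recording, so the following is only a rough sketch of the C-style, opaque-struct interface pattern being described; the entity_system type and the function names are illustrative, not the actual engine API.

```cpp
// entity.h -- public interface: a forward declaration plus free functions.
// Callers never see the fields, so changing them does not ripple through
// the build, and every object is created and destroyed through the system.
struct entity;
struct entity_system;

entity* entity_create(entity_system* system, const char* name);
void    entity_destroy(entity_system* system, entity* e);
void    entity_set_position(entity* e, float x, float y, float z);

// entity.cpp -- the actual layout and logic live only here.
#include <algorithm>
#include <vector>

struct entity {
    float position[3];
    const char* name;
};

struct entity_system {
    std::vector<entity*> live;  // lets the system track object life cycles
};

entity* entity_create(entity_system* system, const char* name) {
    // A real engine would route this through its memory manager.
    entity* e = new entity{{0.0f, 0.0f, 0.0f}, name};
    system->live.push_back(e);
    return e;
}

void entity_destroy(entity_system* system, entity* e) {
    system->live.erase(std::remove(system->live.begin(), system->live.end(), e),
                       system->live.end());
    delete e;
}

void entity_set_position(entity* e, float x, float y, float z) {
    e->position[0] = x;
    e->position[1] = y;
    e->position[2] = z;
}
```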
12:48 Small interchangeable elements: we try and write all our code as small interchangeable elements, small functions and macros instead of huge monolithic ones. The idea is that by doing that up front we minimize the amount of refactoring we actually need to do when we want to add new functionality that uses existing functionality. It also improves the readability of the code and lowers the barrier of entry for people who haven't been working on the code before, because it's easy to read and it kind of automatically documents the code.
13:30 That actually brings me to my next point, which is self-commenting code. This is something that gets discussed quite a lot; some people believe that you should comment a lot, some that you should explain everything in code, and we're kind of on the middle ground here, but we prefer function and variable naming over commenting. Instead of writing really long comments, we try to explain everything using the code itself. This isn't always possible, of course, but we strive to do it. The benefit is that the code documentation is automatically maintained, for the most part at least.
14:10 Upfront complexity: a lot of code in game development tends to be relatively complicated, of course, and you get into functions that might be really complex to actually understand. Quite often people write these really simple interfaces that don't really tell you what the function actually does. That is one thing we don't do either: we try to write our interfaces so that you can actually see that this functionality is going to be really complex, really expensive. From the moment you start using the functionality you immediately see that it's going to be expensive, so you try not to use it carelessly. A friend actually told me once that if you see magic in code, it's probably just a really dirty trick; there's no such thing as magic in code, so you should immediately get wary when you see it.
15:14 when you do that as an example here a
15:19 real life example actually if you we
15:23 have we had a get distance field
15:24 function which basically you know the
15:29 name implies it basically returns a
15:30 distance field okay but it was actually
15:33 computing this distance field so we had
15:35 occasional frames where we actually got
15:37 this method called and we have glitches
15:40 on those frames but it was really hard
15:42 to actually find because it was an
15:43 occasional friends so that is that it's
15:46 one thing for instance you should just
15:47 buy a small change just call it
15:50 calculate distance field that
15:52 communicates that it's actually going to
15:54 be
15:54 more expensive so now basically two I
16:01 So now to what is, I guess, my favorite topic in a way, which is data-oriented design. What data-oriented design is basically about is that everything is regarded as a transformation of data from one form to another. It might be creating new data or modifying existing data, but it's still a transformation. Basically every problem is regarded as a data problem, and this means that we don't try to express these really complicated structures and the communication between different real-world concepts in the code itself; instead we look at what the actual data is that defines a concept and what the processes are that operate on it.
16:52 We also separate function from data. This means that different processes can operate on the same data, and we can build more complex operations that are actually easier to understand, because we don't need all that glue between the different objects and concepts. One really core idea of data-oriented design is that all applications actually run on physical hardware, real tangible physical hardware, not some academic daydream. This thinking allows us to organize the data in a way that is optimal for the different target systems. The organization of the data of course depends on how we actually use it, and this is why it's super important that we really understand our problem and our data, not just the interfaces between objects, but what we actually need to do to achieve the results.
17:57 Optimization for us is of course really important, and to understand why the data layout is so important on CPUs, we need to quickly go through how memory is structured. This might be something people have pretty good knowledge about already, but I'm still going to go through it to make sure we're on the same page. Memory is built as a hierarchy: you have registers, which are the closest to the ALUs and are the workhorse of computation; then you have a number of caches, L1, L2, L3, and you could have an L4 as well, which are really fast blocks of memory on the CPU chip itself; and then finally you have DRAM, which is the slowest type of memory we're going to be talking about here. Of course there's also disk, DVD and so on.
18:52 When accessing memory and updating the values in the registers, the CPU first checks whether the address can be found in a cache; which cache depends on the implementation. Not finding an address in the cache is called a cache miss, and if a cache miss happens, the processor basically has to do a round trip to DRAM and issue a memory read. Why this is so important is that DRAM is roughly 20 times slower than the L2 cache and roughly 200 times slower than the L1 cache. These are rough empirical values, not tied to a specific architecture, but they give the scale of things. Add to that that caches are updated in bursts, so a full cache line is read every time a cache miss happens and DRAM needs to be accessed.
20:00 So how do we actually utilize this? We can take advantage of the cache updates by grouping data that is used together so that it's close in memory. This allows memory reads to bring in more useful data at a time and results in fewer cache misses overall. We also try to process as many objects of the same type at the same time with the same code, because this also improves instruction cache utilization: the instructions the processor executes are data as well, so they're also read through caches. And as already mentioned, cache updates will also fetch the vicinity of the address, so they will fetch useful data if we have it laid out consecutively in memory. I actually have a couple of examples about data-oriented design coming up.
21:06 So, the pros and cons. First of all, it's actually really easy to unit test applications designed in a data-oriented way, because you have the data readily accessible; it's not hidden behind some arcane objects, so it's easy to verify that the data is correct. It's a lot simpler to refactor, because you have sets of data and processes operating on that data, so you don't need to handle dependencies between objects or create intermediate objects just because you need to handle a certain dependency. And it's also really hardware friendly, which is really good for any application. As for cons, the major one worth mentioning here is that it does require a paradigm shift; if you're used to object-oriented programming, you might be in for a surprise.
22:08 Finally, the third pillar is multi-platform development. The game we're actually making needs to run on multiple platforms. We have development platforms, Windows, Linux, and OS X, and we have a couple of target platforms, iOS and Android; and Linux is actually also a target platform, because our server environments use it.
22:35 So how do we manage this whole thing? Basically, we use CMake. CMake is a tool that executes a certain kind of script that defines the projects in a platform-independent manner, and what CMake then does is generate the platform-dependent versions of these projects. This allows us to not care too much about the individual project files; of course we need to care about what kind of code we write for the different platforms, but we don't need to make changes in all of the different projects just to make it work on each platform. We supported the platforms early on, and now it's basically about keeping everything working, and we're helped here by continuous integration and automated testing. Currently automated testing means unit testing, but we're also planning a wider-ranging integration testing framework.
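The talk doesn't name a test framework, so here is only a minimal assert-style sketch of the kind of unit test that continuous integration can build and run on every platform; the vec3 type is a hypothetical stand-in for engine code under test.

```cpp
#include <cassert>
#include <cmath>
#include <cstdio>

// Hypothetical math type standing in for engine code under test.
struct vec3 { float x, y, z; };

static vec3  vec3_add(vec3 a, vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static float vec3_length(vec3 v)      { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Each test is a plain function; CI builds this executable for every
// platform and fails the build if any assertion fires.
static void test_vec3_add() {
    vec3 r = vec3_add({1.0f, 2.0f, 3.0f}, {4.0f, 5.0f, 6.0f});
    assert(r.x == 5.0f && r.y == 7.0f && r.z == 9.0f);
}

static void test_vec3_length() {
    assert(std::fabs(vec3_length({3.0f, 4.0f, 0.0f}) - 5.0f) < 1e-6f);
}

int main() {
    test_vec3_add();
    test_vec3_length();
    std::puts("all unit tests passed");
    return 0;
}
```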
23:33 So how do these things actually fit into the real world? Well, here's a case study of how we developed the effect system itself. It all started with a need: we already had a particle system, so we had a system for rendering and spawning particles, but what we didn't have was a system for game code to actually trigger effects. We also wanted tools and a way for artists to create effects easily and with minimal engineering effort. And effects can be wildly complicated; they can have multiple meshes, lights, multiple particle emitters and all that, and we didn't have these kinds of structures. So then we asked ourselves a lot of questions: what kinds of effects would we have, how would game code like to control these effects, and finally how would artists really want to create these effects, and so on.
24:34 We actually believe in long discussions, so we have quite a lot of discussions between the engineers about different areas of the engine and of the game code itself. This is something we do to get everyone aligned with the problem domain, so everyone understands what we're actually doing. Overall this leads to better autonomous working, because we know that everyone has a more or less similar understanding of the area.
25:07 When we were finding answers to these questions, we noted a couple of things that we needed to take into account when designing the architecture. First of all, it needed to be super easy to extend, it needed to be easy to use from game code, and it also needed to quite naturally have really good performance.
25:31 What we ended up with: the front face of the system is called the driver system. In this driver system, the game code knows only of a single data structure, which is returned to it when the effect is created; this also acts as the handle to the effect. Game code then updates properties in this data structure, after which the driver is run on this data once per frame, and it updates the actual effect based on that data. So only the driver actually needs to know the details of how the effect itself is built, or small parts of the details, which I'm going to be talking about in a bit. The engine also supports a number of default drivers, but game code can also implement its own if it wants to.
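A rough sketch of that driver idea, with hypothetical names (the actual handle structure and driver interface aren't shown in the talk): game code writes into a plain data structure, and a driver function consumes it once per frame.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical handle returned to game code when an effect is created.
// Game code writes into this plain data structure; it never touches the
// effect's internal components directly.
struct effect_handle {
    float position[3] = {0.0f, 0.0f, 0.0f};
    float scale       = 1.0f;
    bool  active      = true;
};

struct effect;  // opaque: only the effect system knows the layout

// A driver is just a function run once per frame over the handle data,
// pushing the values into the actual effect.
using effect_driver = void (*)(const effect_handle&, effect&);

void run_drivers(const std::vector<effect_handle>& handles,
                 std::vector<effect*>& effects,
                 effect_driver driver) {
    for (std::size_t i = 0; i < handles.size(); ++i)
        if (handles[i].active)
            driver(handles[i], *effects[i]);  // e.g. a default transform driver
}
```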
26:25 Effects are basically built up of components; they're basically just a collection of components, and this system is actually built on top of the same system as anything else in the scene. So we can attach anything to an effect: we can attach a model, lights, whatever is in the scene. It adds a lot of flexibility, and it's data-driven, so all of this data can be loaded from configuration files, the configurations and setups of the components.
27:02 So what is this entity component system, then? Well, first of all, it's an example of data-oriented design. In an entity system, entities are actually just the sums of their components. A component defines a certain set of data that defines part of the entity, and we also have component systems that operate on this data; all in all, this defines what the entity itself is. So we don't define these concepts as such in code; instead we implement the concepts themselves through their data and the systems that operate on it.
27:47 Each component is also associated with one system that does the crucial operations of the component, and each system defines multiple processes: these could be updating, rendering, and so on. Overall this allows for optimizations on a system-by-system basis, based on how the processes are actually using the data itself. But all components are still accessible and modifiable globally, so this seeming tying of the code to the data only applies to the more processing-intensive stuff. Each component is basically reachable globally, so we can link components to each other through their unique IDs, which they are associated with when the components are loaded.
28:45 Here's a little code. This isn't the engine code, but it follows the same principles. On the left-hand side you can see the header file (yes, it's readable, good), which is very simple; again this reflects what I showed earlier, simple C-style code. We have the forward-declared transform, which is the component data, and then a single function; you could of course have a bunch of other functions here, but this is the most important one for this use case. It finds the component based on the ID of the component.
29:27 On the implementation side you can see we have actually defined the structure, so it contains a position and a rotation. Then we have a number of vectors: one for the transforms, one for the world matrices, and one for the transform component IDs. Then we have the transform update, which is basically a process of the component system; it's called by the entity system when we need to update all the different components.
29:59 Then we have the other function here, which basically just loops through the IDs and returns the correct transform. Looking at the implementations, it's important to note that the first function only accesses the position and rotation information and writes out the world matrix, so it makes sense to have the position and rotation information close to each other, because that's what we're operating on; that's why we have the transform component laid out like that. Then in the find you can see that we are only accessing the IDs of the components, so we don't really need to touch the data stored in the actual transform components. We can just iterate through all the IDs in consecutive memory, keeping the cache hot, then find the correct ID and return a pointer to the transform at that specific index. This allows us to have different memory layouts for different use cases, depending on how we're actually using the data.
31:00 As a final note on this slide, the transform find function isn't really very safe, as some of you might have noted, and this isn't actually something we're going to be doing in production; it's for demoing purposes. In production we actually have quite a lot of more complicated structures for optimizing the different layouts, specifically in this transform component.
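Since the slide code is hard to read in the recording, here is only a hedged reconstruction of what the described header and implementation might look like, with hypothetical names: forward-declared component data, parallel arrays laid out by usage, an update process, and the unsafe find-by-ID that the talk flags as demo-only.

```cpp
// transform.h -- simple C-style interface, as described above.
struct transform;                           // opaque component data
transform* transform_find(unsigned int component_id);

// transform.cpp -- data laid out according to how it is used.
#include <cstddef>
#include <vector>

struct transform {
    float position[3];
    float rotation[4];                      // e.g. a quaternion
};

// Parallel arrays: transforms, world matrices, and component IDs are
// stored separately so each process only streams the data it touches.
static std::vector<transform>    s_transforms;
static std::vector<float>        s_world_matrices;   // 16 floats per transform
static std::vector<unsigned int> s_component_ids;

// Process run by the entity system once per frame: reads position and
// rotation, writes the world matrices. (Matrix math omitted.)
void transform_update() {
    for (std::size_t i = 0; i < s_transforms.size(); ++i) {
        const transform& t = s_transforms[i];
        float* m = &s_world_matrices[i * 16];
        // ... build the world matrix for t into m ...
        m[12] = t.position[0]; m[13] = t.position[1]; m[14] = t.position[2];
    }
}

// Find by ID: only the ID array is scanned, so the cache stays hot; the
// transform data itself is untouched until the caller dereferences it.
// As noted in the talk, returning a raw pointer like this is not safe
// for production use.
transform* transform_find(unsigned int component_id) {
    for (std::size_t i = 0; i < s_component_ids.size(); ++i)
        if (s_component_ids[i] == component_id)
            return &s_transforms[i];
    return nullptr;
}
```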
31:26 So with that in mind, let's create a simple effect that has just two components: a transform component and a particle emitter component, and these are linked to each other, so the particle emitter component knows about the transform component. Then we have the game code modifying the data that is provided to it when the effect is created, the driver is run on that data, and it updates the transform component; it only needs to update the transform component with where the effect is actually going to be. The particle emitter can now retrieve the position and rotation information from the transform, and thus knows where to spawn the particles and at which orientation.
32:13 To summarize the architecture: it is data-driven, so artists can create effects by creating an effect file, basically a JSON file, and of course we can build all kinds of tools on top of this. Then we have a very simple interface for actually using the effects from game code, and the effects are built in a componentized manner so that we can extend the system really easily by creating new components.
32:48 The system is also performance critical, which in our case means many simple effects rather than a few complicated effects, meaning again that we need to be able to spawn new effects really fast. The way we do this is that when we first encounter an effect template, we create a binary representation of that template, so that creating a new effect becomes pretty much just a memory copy.
33:32 Naturally, updating and rendering also need to be super fast. I still have time for another example of data-oriented design, which is particle emitter component optimization, specifically updating specific properties.
33:50 Let's say that for this example we have a particle that contains position, velocity, rotation, and color information, and we have multiple particles being emitted by one emitter, and then multiple emitters: a very typical setup. The operations then actually touch only a few properties at a time. Looking first at a naive implementation, we would have each of those properties defined in a single structure and multiple instances of that structure, and let's say a cache line for this example could contain only a single structure at a time. The component now iterates through each of these particles, but only the position and velocity are actually touched, so the rotation and color information is sitting in the cache line unnecessarily.
34:47 So what we did is, again, we looked at how we actually use the data, and we grouped the data based on that. Instead of having all the properties in the same structure, we have them laid out property by property, consecutively in memory. So when we're accessing individual particles we're accessing individual indices into these arrays, but at every index we're actually reading in useful data for the next particles that we're going to be processing.
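As an illustration of that layout change (a sketch with placeholder field names and a placeholder integration step, not the actual engine code), here is the naive array-of-structures version next to the property-by-property, structure-of-arrays version described above.

```cpp
#include <cstddef>
#include <vector>

// Naive layout: every particle drags rotation and color into the cache
// even though the movement update only touches position and velocity.
struct ParticleAoS {
    float position[3];
    float velocity[3];
    float rotation[4];
    float color[4];
};

void update_positions_aos(std::vector<ParticleAoS>& particles, float dt) {
    for (ParticleAoS& p : particles)
        for (int i = 0; i < 3; ++i)
            p.position[i] += p.velocity[i] * dt;
}

// Data-oriented layout: each property lives in its own contiguous array,
// so the update streams only the data it actually needs.
struct ParticlesSoA {
    std::vector<float> position_x, position_y, position_z;
    std::vector<float> velocity_x, velocity_y, velocity_z;
    std::vector<float> rotation;   // untouched by the movement update
    std::vector<float> color;      // untouched by the movement update
};

void update_positions_soa(ParticlesSoA& p, float dt) {
    for (std::size_t i = 0; i < p.position_x.size(); ++i) {
        p.position_x[i] += p.velocity_x[i] * dt;
        p.position_y[i] += p.velocity_y[i] * dt;
        p.position_z[i] += p.velocity_z[i] * dt;
    }
}
```

Cache lines filled from the position and velocity arrays now contain only data the loop will use, which is the fewer-cache-misses effect described in the talk, and the tight loops are also easier for the compiler to vectorize.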
35:25 So that basically summarizes it. We chose a custom engine for reasons of performance and control. We also keep a really tight focus on what we actually do in the engine, and this allows us to keep the team small and nimble. I also showed the basic pillars of how we develop the engine, and an effect system case study of how we actually do this in the real world.
35:56 Here's the we're-hiring slide, so if you're interested, please go and check out our web page. And here's some personal information as well: I have a Twitter account, so if you're interested in some tidbits of tech information, go follow me, and there's also my public Facebook page, which I use as kind of a blog with small blog posts. So that's it, thank you, and I think it's time for questions.
36:51 Hi, you talked a lot about code cleanliness and about performance. My question basically would be: what do you do when these two sort of end up clashing? An example I'd use here is the use of vector instructions; you can do a lot of really powerful stuff by embedding a bit of assembly code into your code with a few ifdefs, or does your cleanliness basically force you to just structure the data so that the compiler's optimizer will do it for you?
37:28 Yeah, so basically the question is: what do we actually do if these different principles clash and we're kind of forced to optimize more and write a bit of dirty code? Are we hard-headed about it, do we just rely on the compiler to do it, or will we do it ourselves? Well, no, we're not going to be completely hard-headed about this; we'll do a bit of assembly code there, so it looks a bit ugly, but it produces better results.
38:05 Yeah, so basically we use common sense; we're not hard-headed about things. One point in data-oriented design itself is that we really need to understand the problem, and if the problem is something that we need to do some tricks for, then we're going to do that. The thing is that we try to keep the code as understandable as possible, but if we need to go a bit further in optimization and do some really tricky stuff, then that's just what we need to do. I mean, the product is still the most important thing; the cleanliness of the code is more of a guideline, it's what we strive for, so we're not too afraid to go away from that a bit. Okay, thank you.
39:06 How often do iOS updates, or updates to the actual platform, break your stuff?
39:16 So the question is how often iOS and other platform updates actually break our stuff. Well, not much, actually, because we try to keep the code as platform-agnostic as possible. There are some occasions, of course, when something happens, and of course we're not yet in production, so I would say we're going to see that quite a lot, but currently it hasn't really been that much of a problem. For the big part that's probably because we try to keep up the standards; we try to keep the code simple and clean, and we haven't really needed to do a lot of platform-dependent heavy optimizations as of yet.
40:10 Any more questions? If you want, you can come talk to me after this presentation as well. Okay, if not, then let's give a warm round of applause to Timo. Thank you.