Update: I've dropped the "2011" from the title, as I've kept this list more or less up-to-date every time I change a computer.
Other than Visual Studio, Photoshop, Office/Outlook, 3dsMax (/Maya/Modo/...) and Perforce, these are the tools that I always have installed on my development computer.
In strikethrough are tools that I used to install / might still be useful, but that I don't really rely on anymore (these might go away as I update the list).
I don't use OSX for development at all, but I use it almost exclusively at home for photography and general computer stuff... So I included a few OSX tools too, even if there's less programming stuff among them.
MUST-HAVE! Everything is by far the tool I use the most. It's life-changing. After installing it, I also limit Windows' search indexing to just the start menu and email (the latter is required by Outlook). Agent Ransack is nice too.
I still write a lot of plain .txt files, and using markdown gets you some formatting for free. Typora/Texts is my choice (OSX too), MarkDeep is nice, and Marp can be cute as well (for presentations; there are many other similar ones). AsciiDoc.
LaTeX for publications - I use TexStudio and BasicTeX (a smaller version of MacTeX) on OSX (then manually add packages I need via the command line package manager).
Also for screen captures! I just use the desktop capture device and encode an mp4. On OSX, the built-in QuickTime can do screen recording.
Giffing and ScreenToGif are great for desktop capture too (using WebM encoding); all the other video capture programs I've found are either insane crapware or depend on proprietary codecs that you'd have to distribute.
Actually, today Giphy Capture works wonders as well (and it's on OSX too).
MUST-HAVE Processing (also on OSX). LibCinder, OpenFrameworks, Pocode and Polycode are decent if you want a Processing alternative for C++.
MUST-HAVE C-Toy (also on OSX). Quite nifty! The Tiny C Compiler integrated with some graphics drawing functions and wrapped with a file monitor, so your project live-updates. Very useful to quickly test C algorithms!
AutoIt is NIFTY! I use it to craft quick GUIs around command-line tools or to automate GUI tools... It's really nice when you have to do a given thing over and over, and its BASIC-inspired language makes me nostalgic too. Also, it's "portable", which I always prefer. AutoHotKey uses AutoIt scripting, but I haven't used it yet.
MUST-HAVE Acrobat Reader (even if I should probably prefer the less bloated SumatraPDF, which doesn't annoy the user with endless updates).
ProcrastiTracker... Also for "productivity" I sometimes like to use the "pomodoro technique". I have a kitchen timer on my work desk that seems to work best (I like that it's physical and rings), but ChronoSlider on OSX doesn't suck either (you'd be surprised how bloated or bad most timer apps are...).
RapidEE (environment variable checker/editor), also portable.
SharpKeys if I need to remap some of my keyboard keys.
Some keyboards emit "weird" scancodes (e.g. my wired Apple Italian keyboard); the only program I've found flexible enough to recognize them is KeyTweak.
Speaking of using Apple hardware, if you have a laptop and you like your natural scrolling direction on the touchpad, WizMouse can enable that.
By the way, Win 8.1/Boot Camp on my MBPr 2013 handles dragging horribly, but it seems better if you enable the (unrelated!) "tap to click" and "dragging" options in Boot Camp. Trackpad++ is also related, but I haven't tried it yet.
MUST-HAVE On OSX, when using an external mouse you might want to disable natural scrolling while keeping it enabled for the trackpad. Scroll Reverser does that!
Another OSX MUST-HAVE is gSwitch, to force the integrated GPU only (or the discrete one only).
Background Music lets you change the audio volume per app.
MUST-HAVESynergy for keyboard/mouse sharing across computers. There's also a fork called Barrier, which I haven't tried yet.
VirtualBox can be useful and it's free, even if I usually prefer VMware (sometimes I use the Player with pre-made OS images).
Hyper-V, included with Windows, can be great too; it's not a bad idea to keep your different work environments in different VMs/drives nowadays, as drive space is not a big deal. On OSX, Parallels is really good.
I use Docker for all the times some Python library is available on Linux only (as often happens with deep neural network stuff...).
Command-line / Terminal
I love Cathode on OSX. Also on OSX: CoolRetroTerm, which is open source, but not quite as great.
MUST-HAVE For OSX, I use Homebrew and Cakebrew.
I'm not really a command-line ninja, but I've started adopting it a bit more. I usually install tmux, nnn, tldr, a recent version of nano.
Window management, various
1Up Industries' stuff is really good: Fences, Bins (7Stacks is somewhat similar, free, and emulates OSX stacks).
On OSX, some people/setups seem to need SmoothMouse to avoid mouse lag.
There are a lot of other tools that look nifty, but I didn't end up using them often... DisplayFusion looks neat but I haven't tried it yet; its most interesting feature for me is placing a second taskbar on the second monitor, with only the applications used on that monitor. MultiMon does that for free. A tiling window manager is good if you have a lot of screen space, like WinSplit (OSX alternative: SizeUp).
OSX: not a tool, but important, kill the lag in the dock autohide: defaults write com.apple.Dock autohide-delay -float 0
MUST-HAVE Mathematica (OSX too), if the studio has a license for it (or I make them buy one!)
Mathics is an up-and-coming free Mathematica-compatible system.
Python Anaconda distribution that I already mentioned
GeoGebra can be useful when tinkering with geometrical constructions; it's quite powerful (currently the beta of v5 supports 3D too, and it's available "portable" as well) but slightly more focused on constraints than I'd like (I'd love something very interactive with optional constrained stuff, like a parametric CAD).
Some people swear by TikZ (examples), looks extremely cool but it's not interactive...
On the simpler side, and 2D only, DrGeo (portable as well) is great to tinker with.
SciLab, which I prefer over Octave (though the latter is more compatible with MatLab), but nowadays I don't really care about MatLab-like environments; I prefer either Mathematica or SciPy (Anaconda).
Having a hi-res screenshot tool in a game, even if you don't ship a "photo mode" to the user, is very common for marketing purposes (posters and so on). What I commonly see done for such things is a tiling renderer: that's to say, render the same scene multiple times, each time writing out a portion of the final screenshot. That's done because GPUs have limits on rendering sizes, and often the memory needed to store the intermediate buffers is simply not there (especially if you're shipping on consoles).
The tiling strategy often ends up being a pain for two reasons: first, you have to scale all pixel-based effects so they don't shrink in the final screenshot*; second, tiles create non-symmetric frustums that are tricky to handle, as assuming a symmetric frustum allows simpler equations in a few places (e.g. in shaders that go from the z-buffer to view space and back).
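To make the non-symmetric part concrete, here is a small sketch (names are mine, not from any engine) of the off-center frustum a tile gets when you split a symmetric near-plane rectangle into an N x N grid, glFrustum-style (left/right/bottom/top):

```cpp
// Near-plane rectangle of a frustum, glFrustum-style.
struct Frustum { float l, r, b, t; };

// Full symmetric frustum covers [-w, w] x [-h, h] at the near plane.
// Tile (tx, ty) of an N x N grid gets the sub-rectangle below; note that
// it's symmetric (l == -r, b == -t) only in the trivial 1x1 case.
Frustum tile_frustum(float w, float h, int N, int tx, int ty) {
    Frustum f;
    f.l = -w + 2 * w * tx / N;
    f.r = -w + 2 * w * (tx + 1) / N;
    f.b = -h + 2 * h * ty / N;
    f.t = -h + 2 * h * (ty + 1) / N;
    return f;
}
```

All the "simple" shader math that assumes l == -r and b == -t breaks on these tiles, which is exactly the pain point above.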
So here is a recipe for a different screenshot tool. 1... 2... 3...
Instead of generating N tiles, shift your viewport (yes, as in the viewport matrix) by a fraction of a pixel N times (in a grid fashion). Then, instead of collating the tiles next to each other, take each generated image and interleave the samples, like this:
[Figure: four two-by-two images. Image one has no shift, image two is shifted by half a pixel in x, image three by half a pixel in y, image four by half a pixel in both x and y.]
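The interleave step is just index arithmetic; here's a sketch of it (images as flat float arrays, function name mine): pass (sx, sy) is the render whose viewport was shifted by (sx/N, sy/N) of a pixel, so its pixel (x, y) lands at (x*N + sx, y*N + sy) in the final image.

```cpp
#include <vector>

// Interleave an N x N grid of sub-pixel-shifted renders, each W x H,
// into one (W*N) x (H*N) image. passes[sy * N + sx] is the render with
// viewport shift (sx/N, sy/N) of a pixel.
std::vector<float> interleave(const std::vector<std::vector<float>>& passes,
                              int W, int H, int N) {
    std::vector<float> out(W * N * H * N);
    for (int sy = 0; sy < N; ++sy)
        for (int sx = 0; sx < N; ++sx) {
            const std::vector<float>& img = passes[sy * N + sx];
            for (int y = 0; y < H; ++y)
                for (int x = 0; x < W; ++x)
                    // Pixel (x, y) of this pass becomes one sample of the
                    // N x N block at (x*N, y*N) in the final image.
                    out[(y * N + sy) * (W * N) + (x * N + sx)] = img[y * W + x];
        }
    return out;
}
```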
Ok... not quite! I lied. Did you spot the error? A hint... pixel footprint...
Each image will cover one quarter of the scene in the final image, but the GPU doesn't really know that. It will still generate all derivatives as if it were not a high-resolution image, thus all the mipmaps will be wrong in the final image (too blurry). The solution is to mip-bias all your samplers (it should be easy; in your material system there is probably a single point where the textures of a given material are bound), or even more easily, just set the maximum mip level to zero, as usually very high-res screenshots will hit the top mip anyway.
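By how much to bias? If the final image is "scale" times the rendering resolution, the derivatives are overestimated by exactly that factor, so the LOD correction is -log2(scale) (a small sketch; the function name is mine):

```cpp
#include <cmath>

// LOD bias to apply to every sampler when the final screenshot is
// 'scale' times the rendering resolution: derivatives are computed at
// rendering resolution, so they are 'scale' times too large, which the
// hardware reads as log2(scale) extra mip levels. Counteract it.
float screenshot_mip_bias(float scale) { return -std::log2(scale); }
```

So a 4x4 interleave grid (16x the pixels, 4x the linear resolution) wants a bias of -2.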
Erhm... I lied again! Sort of. This method does indeed work, but it's still not correct. We're shifting the view, which means that our images do not share the same view center, so we're not using a pinhole camera anymore (or rather, we made that pinhole non-infinitesimal). In practice this is not a big deal, even if it can still screw up some math (in a couple of SSAO implementations I've worked with, I had to disable the shift when generating the AO map). Of course, to be correct, we should rotate the camera...
Or should we? Ok, enough toying. Rotating the camera is better but still not 100% right, as now we're also slightly rotating the near and far planes for each image. Again, it's not something noticeable, but in the end, to be entirely correct, we should keep the near and far planes straight, and that leads again to non-symmetric frustums (even if only by a tiny bit).
Of course, once you have any method to generate high res screenshots, it's easy to do many more cool tricks, like accumulating images to simulate DOF and motion blur and antialiasing, rendering soft shadows and so on.
Note: * you'll still need such functionality if you're shipping a PC title that supports different resolutions. And it's quite a pain, because if you just express all your screenspace effects in relative units instead of pixels you get the right "size" of the effect, but you will also have less dense samples, and thus not really scale the quality with the resolution. Many effects will actually look worse (aliasing artifacts, e.g. in a bloom we can get shimmering), and others rely on accessing neighboring pixels (think of fast Gaussian filters that leverage the bilinear interpolation between samples).
Add to your bug tracking two compulsory fields to be filled (choosing from a list) when closing a bug: where it was found (AI, Rendering, Loading etc...) and what caused it (Incorrect API usage, corner case not handled, memory stomp, wrong algorithm, variable not properly initialized, invariants not enforced and so on). Soon you'll see what parts require rewriting, what need unit testing, automated tests and so on.
Give testers a build connected to a sampling profiler. Sample. Diff the profiled function list with the list of functions in your symbols file. Voila', cheap coverage analysis. Now we know which code to delete!
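The diff itself is a one-liner once you have the two name sets; here's a sketch (names mine; in practice you'd demangle symbols and filter out compiler-generated ones first):

```cpp
#include <algorithm>
#include <iterator>
#include <set>
#include <string>
#include <vector>

// "Cheap coverage": functions present in the symbol file but never seen
// by the sampling profiler are code that (probably) never executed.
std::vector<std::string> never_sampled(const std::set<std::string>& symbols,
                                       const std::set<std::string>& sampled) {
    std::vector<std::string> dead;
    std::set_difference(symbols.begin(), symbols.end(),
                        sampled.begin(), sampled.end(),
                        std::back_inserter(dead));
    return dead;
}
```

It's statistical, of course: a sampling profiler can miss tiny, rarely-hit functions, so treat the result as candidates to investigate, not a kill list.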
Install automatic time tracking tools, like this one. Anonymously gather data about your project (be nice). At the end, you'll know how much time was spent in which area (much better than measuring the number of changes or check-ins or so). So we'll know which things are better to be moved into a scripting language, or into a dynamically loadable module!
It's also a good idea to keep track of the CPU time of some key processes, like your compiler, linker, or the time your VCS spends in networking operations. Unfortunately, I don't know of any program that does that on Windows, and I ended up writing a simple one a year ago to prove we were spending too much time waiting for linking.
Do you have in-game telemetry? Are you not using it internally, on testers' machines, to gather data (memory usage, FPS and so on)? Shame on you!
A little idea. After reading this post about bools, chars and bitfields, I started thinking of an ugly C++ hack.
What if we defined our own custom bool (chances are that you already have one around in your engine; if not, it's probably not a good idea to add one) that behaves like a bool (char-sized or whatever) but strictly stores only 0 or 1?
With such a thing we could check, when returning a native bool out of it, that what we store is still either 0 or 1, and thus have a means of identifying some memory stomps in the many classes that have boolean members, for free.
Of course this "idea" could be extended, via templates, to have ranged integers and so on, but that would start being really ugly...
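The hack can be sketched in a few lines (class name and reporting strategy are mine; a real engine version would probably log the address rather than assert):

```cpp
#include <cassert>

// A bool that strictly stores 0 or 1. Any other bit pattern found in it
// at read time means something stomped over this byte.
class CheckedBool {
    unsigned char v; // same storage as a typical char-sized bool
public:
    CheckedBool(bool b = false) : v(b ? 1 : 0) {}
    CheckedBool& operator=(bool b) { v = b ? 1 : 0; return *this; }
    bool valid() const { return v == 0 || v == 1; } // false => likely stomp
    operator bool() const { assert(valid()); return v != 0; }
};
```

Usage is transparent (it converts to and from native bool), and the check costs a compare that vanishes in release builds if you compile the assert out.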
There are so many interesting languages gaining popularity these days that I thought it could be interesting to write about them and how they apply to videogames. I plan to do this every year, and probably to create a poll as well.
If I missed your favorite language, comment on this article, so at least I can include that in the poll!
Now, before we start, we have to define what "videogames" we are talking about. Game programming has always been an interdisciplinary art: AI, graphics, systems, geometry, tools, physics, database, telemetry, networking, parallel computing and more.
This is even more true nowadays that everything has turned into a gaming platform: browsers, mobile devices, websites and so on. So truly, if we talk about videogames at large, there are very few programming languages that are interesting but not relevant to our business.
So I'll be narrowing this down to languages that could or should be considered by an AAA game studio, working on consoles (Xbox 360 and PS3) as its primary focus, while maybe still keeping an eye on PC and Wii. Why? Well mostly because that's the world I know best. Let's go.
Where and why we need them: Code as Data, trivial to hotswap and live-edit, easier for non-programmers. Usually found in loading/initialization systems, AI and gameplay conditions or to define the order of operation of complex sub-systems (i.e. what gets rendered in a frame when). Scripting languages are usually interpreted (some come with optional JITs) so porting to a new platform is usually not a big deal. In many fields scripting is used to glue together different libraries, so you want a language with a lot of bindings (Perl, Python). For games though, we are more interested in embedding the language and extending it, so small, easy to modify interpreters are a good thing.
Current champion and its strengths: Lua (a collection of nice presentations about it can be found on the Havok website). It's the de-facto standard. Born as a data-definition language, it can very well replace XML (bleargh) as an easier-to-parse, more powerful way of initializing systems.
It's also blessed with one of the fastest interpreters out there (even if it's not so cool on the simpler in-order cores that power current consoles and low-power devices) and with a decent incremental garbage collector. Easy to integrate, easy to customize, and most people are already familiar with its syntax (not only because it's so popular, but also because it's very similar to other popular scripting languages inspired by ECMAScript). Havok sells an optimized VM (Havok Script, previously called Kore).
Why we should seek an alternative:
Lua is nice, but it wasn't born for videogames, and sometimes it shows. It's fast, but not fast enough for many tasks, especially on consoles (even if some projects, like Lua-LLVM, lua2c and MetaLua, could make the situation better). It has a decent garbage collector, but it still generates too much garbage (it allocates for pretty much everything; it's possible to use it in a way that minimizes dynamic allocations, but then you'll end up throwing away much of the language) and the incremental collector's pauses can still be painful. It's extensible, but you can only easily hook up C functions to it (and the calling mechanism is not too fast), while you need to patch its internals if you want to add a new type. Types can be defined in Lua itself, but that's seldom useful for games. There is a very cool JIT for it, but it runs on very few platforms.
Personally I'd trade many language features (OOP, Coroutines, Lambdas, Metamethods, probably even script-defined structures or types) for more performance on consoles and easier extensibility with custom types, native function calls (invoking function pointers directly from the script without the need of wrappers) etc...
Present and future alternatives:
Io. It's a very nice language, in some ways similar to Lua (it has a small VM, an incremental collector, coroutines, it's easy to embed...) but with a different (more minimal) syntax. It can do some cool things like binding C++ types to the language; it supports Actors and Futures, and it has Exceptions and native Vector (SIMD) support.
Stackless Python. Python, with coroutines (fibers, microthreads, call them as you wish) and task serialization. It's Python. Many people love python, it's a well known language (also used in many tools and commercial applications, either via CPython or IronPython) and it's not one of the slowest scripting languages around (let's say, it's faster than Ruby, but slower than Lua). It's a bigger language, more complex to embed and extend. But if you really need to run many scripted tasks (see Grim Fandango and Eve Online presentations for examples of games using coroutines), it might be a good idea.
Scheme. Scheme is one of the two major Lisp dialects (the other being Common Lisp). It's very small and "clean". Easy to parse, not too hard to write an interpreter for, and easy to extend. It's Lisp, so some people will love it and some will totally hate it. There are quite a few interpreters (Chicken, Bigloo, Gambit) that also come with a Scheme-to-C compiler, and some (like YScheme) have short-pause GCs for realtime applications, which is quite lovely when you have to support many different platforms. Guile is a Scheme interpreter explicitly written to be embedded.
TCL. Probably a bit "underpowered" compared to the other languages, it's born to be embeddable and extensible, almost to the point that it can more be seen as a platform for writing DSL than as a language. Similar to Forth, but without the annoying RPN syntax. Not very fast, but very easy.
Gaming-specific scripting languages. There are quite a few languages that were made specifically for videogames, most of them in reaction to Lua, most of them similar to Lua (even if they mostly go for a more C-like syntax). None of them is as popular as Lua, and I'd say, none of them emerges as a clear winner over Lua in terms of features, extensibility or speed. But many are worth considering: Squirrel, AngelScript, GameMonkey, ChaiScript, MiniD.
Other honorable mentions. Pawn is probably the closest to what I'd like to have: super small (it has no types; variables are a 32-bit "cell" that can hold an integer or be cast to a float, a character or a boolean) and probably the fastest of the bunch, but it seems to be discontinued, as its last release is from 2009. Falcon is pretty cool too, but it seems to be geared more towards being a "Ruby" than a "Lua" (that's to say, a complete, powerful multi-paradigm language to join libraries, instead of an extension/embedded language), even if they claim to be fast and easy to embed. Last, I didn't investigate it much, but knowing Wouter's experience in crafting scripting languages, I wouldn't be surprised if CubeScript was a hidden gem.
Roll your own. Yes, really, I'm serious. It's not that hard, especially using a parser-generator tool. AntLR is one of the best and easiest (see this video tutorial). Interpreters are "easy" and LLVM can be used to write a native compiler that targets consoles (Wii, Ps3, 360 can all be targeted with it) for optimized builds.
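To back the "it's not that hard" claim, here's a complete toy evaluator for integer arithmetic, hand-written (no whitespace or error handling; names are mine). A real scripting language adds variables, control flow and a bytecode VM on top, but the core parsing machinery really is this small (or can be generated by AntLR):

```cpp
#include <cctype>
#include <string>

// Recursive-descent evaluator for + - * / and parentheses, with the usual
// precedence (expr handles +/-, term handles */ /, factor handles numbers
// and parenthesized sub-expressions). Left-associative, integers only.
struct Parser {
    const char* p; // current position in the source string
    long expr() {
        long v = term();
        while (*p == '+' || *p == '-') {
            char op = *p++;
            long r = term();
            v = (op == '+') ? v + r : v - r;
        }
        return v;
    }
    long term() {
        long v = factor();
        while (*p == '*' || *p == '/') {
            char op = *p++;
            long r = factor();
            v = (op == '*') ? v * r : v / r;
        }
        return v;
    }
    long factor() {
        if (*p == '(') { ++p; long v = expr(); ++p; /* skip ')' */ return v; }
        long v = 0;
        while (std::isdigit(static_cast<unsigned char>(*p)))
            v = v * 10 + (*p++ - '0');
        return v;
    }
};

long eval(const std::string& s) { Parser ps{s.c_str()}; return ps.expr(); }
```

eval("(2+3)*4") gives 20; precedence and associativity fall out of the grammar structure for free.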
Where and why we need them: by "high level" here we mean languages "higher than C". We seek features like type safety, serialization, reflection, annotations and introspection, runtime code generation, dynamic loading, native threads, better tools integration (refactoring, automated coverage and testing), object lifetime management and so on and on. Usually they are based on a VM and a JIT, and approach (in theory they could even surpass, due to runtime optimizations) the speed of natively compiled systems.
Such languages can be used for most of the game code, even without loss of speed as anyways most games end up implementing such features in their own, often slow, cumbersome, error-prone ways (i.e. reference counting for lifetime management, macros and templates for reflection and so on).
Current champion and its strengths:
Nowadays, by far, it's the CLI (plus CLS) and C#. For a while there was some fascination with Java, and some PC titles shipped using it, but today C# wins hands down. It's the de-facto standard for tools, and it's getting into shipped titles as well. Unity is one of the most used game engines out there, and it's based on C#. Microsoft XNA puts C# on the 360. Some games on consoles have already shipped using C# in their runtime.
Mono is a strong opensource implementation: it has a JIT, an ahead-of-time compiler and an interpreter, and it can even use LLVM as a backend. Also, the CLI supports many, many other languages (notably F#), including dynamic ones via the DLR, so it can serve for scripting as well (see IronPython, IronRuby, IronScheme, Boo and so on...)
Why we should seek an alternative:
We shouldn't. Maybe one day we will, but right now it would be lovely to have more and more game code written in a higher-level language; instead, we still rely mostly on the systems language plus some limited scripting. CLI is our best bet so far.
Present and future alternatives: JVM languages. The Java Virtual Machine was cool, and nowadays it still hosts many interesting languages (notably, Clojure and Scala). But the truth is, most of them are also available for the CLI, the JVM does not have any technological advantage over it (actually, it can be a bit harder to compile for) and we don't have an equivalent of the excellent Mono project. Surely there are some awesome JIT compilers, LLVM support via the VMKit project, even code hot-swapping frameworks, but it seems that it's losing momentum quickly for gaming and being pushed more and more into the server realm.
Erlang is another VM-based language that is getting quite some buzz. It was meant for servers and it shows, but more and more games are looking into coroutines and actors for parallelism (Mono added coroutines mostly for games), and Erlang was made specifically for that. It's a functional language that revolves around actors for concurrency. It supports code hot swapping natively, and that alone is enough to put it in this list.
Mono is a CLI/CLS implementation, but it's worth noting that it has added enough extensions to be considered a platform of its own. Some are compiler extensions that do not change the language (i.e. programs using Mono.Simd will run on any CLI, even if they probably won't use SIMD instructions to speed up the computations), but some are not: notably Continuations support (Unity, for example, relies heavily on that) and the uber-cool assembly injection support.
OCaml. In a perfect world, ML-family languages would rule over Algol-like ones. ML is neat: strongly, statically typed but with powerful type inference, functional but impure and strict. All with a syntax that really makes sense, not as minimal as the Lisp family's, and way easier (and way less research-oriented) than Haskell. OCaml is one of the leading ML dialects, probably the most used one (together with the already-mentioned F#). It has a good optimizing compiler that works on a number of platforms. Realistically? It won't happen yet.
ML is a great language to know and use, but it's too far removed from Algol (see LangPop and Tiobe for which programming languages are well known...) to be successful, in my opinion (maybe F# could change that...). Also, there are many nice dialects but no big standard with multiple compiler implementations, which is what you really need to be "safe" when choosing a systems language.
Haskell. It's perhaps surprising that this one made it into the list. At least, it surprises me; I was "forced" to add it due to the demand here and on the survey. I guess a lot of its "popularity" among videogame programmers is due to this paper by Tim Sweeney, where he uses Haskell to demonstrate how types can help our job.
Haskell was born as a research tool, a "definitive" functional language capable of supporting many different models of computation. It's a purely functional, strongly typed language, so you can "reason" about it formally. It's lazy by default. Now, while being lazy, functional and strongly typed is undoubtedly nice, I don't think being pure or lazy by default is.
Surely, in theory, having no side effects makes automatic parallelization possible, and Parallel Haskell does that neatly, but in practice the results are not great. And it's true that monads are not that hard (this tutorial is neat) and there are plenty of nice resources on the language, but I still don't think purely functional data structures are something most people will easily understand or be able to write (in fact, most basic data structures that we take for granted in the mutable programming world are still great research topics when turned into persistent versions).
Reasoning about the space and time performance of a Haskell program is also very hard, and while I do think that a good language should decouple the logical representation of data from its physical layout, I also think we still need to be able to strictly control both. In some domains, more declarative languages are surely welcome.
I do think that Haskell is a language that needs to be learned, and it's great to experiment with. But I personally don't see it as something we will use in the foreseeable future.
Where and why we need them: high-performance, core code. Languages in this tier need to be able to directly manipulate memory and to support all platform functionality (even better, inline assembly). Statically compiled, strict languages. Used for computational kernels, core data structures and direct interfacing with the hardware.
Current champion and its strengths:
C/C++, by far. It's the only language that you will find on all the platforms. Its compilers are usually the fastest, and they get augmented with all the features needed to fully use the target hardware (e.g. SIMD extensions and so on).
Native platform libraries interface with them and only them out of the box. In other words, right now for any practical purpose, that means that ANY other language in this category has to have an easy, fast interface with C/C++ or compile to (generate) C/C++.
All the programmers in your company know at least some C and C++.
Why we should seek an alternative:
C is too low-level for the needs of huge projects as modern games are, it tends to become cumbersome if used for large parts of your source.
C++ is made of evil. Probably not many in your company really know all of C, surely no one fully understands C++.
Also, both C and C++ are showing their age in some respect. Important concepts like SIMD operations are supported only through compiler extensions. Other language features made sense ten years ago but not too much now (for example, short-circuiting boolean operations generates branches that often cost more than what they save).
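The short-circuit point, concretely: with &&, the compiler must not evaluate the right side when the left is false, which typically means a branch. When both operands are cheap, already-computed comparisons, a non-short-circuiting & evaluates both and lets the compiler stay branchless (a sketch; a micro-optimization to apply only where profiling justifies it):

```cpp
// Range test written with & instead of &&: both compares always execute,
// so there is no short-circuit branch for the CPU to (mis)predict.
// Semantically identical here because both operands are side-effect free.
inline bool in_box(float x, float lo, float hi) {
    return (x >= lo) & (x <= hi);
}
```

With && the same function is allowed to skip the second compare, which on in-order console CPUs can cost more (as a branch) than the compare it saves.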
Present and future alternatives: D. D is C++ done right. There is a proprietary compiler from Digital Mars and frontends for GCC and LLVM. It's an interesting, well-made language, endorsed even by Alexandrescu, a hardcore C++ guru.
It looks like C++ but it's much simpler (it drops all C compatibility, the preprocessor, multiple inheritance, forward declarations and include files, non-virtual member functions and so on) and much more powerful (sane memory management, sane templates, first-class functions and closures, immutable structures, modules, threads, contracts, dynamically compiled code). It still misses some needed features (like runtime reflection, even if I hear it's not hard to add as it supports compile-time reflection), but most of it is there.
Its major obstacle to success is its similarity to C++: so similar that it's questionable whether its superior design is worth the trouble of migrating to a language that still doesn't guarantee out-of-the-box support on every platform.
Objective-C. I don't have enough experience to write about this, but I had a few comments by programmers advocating it, so I'll have to include it here. It's already used in games as it's the language of choice for Apple devices, but I don't know if it could be suitable for AAA games.
Go. Go is a very recent language created by Griesemer, Pike and Thompson at Google. That's enough to be worth considering. Currently there are a few compilers, notably one in the GCC stack. It's much simpler than C++: it only supports interfaces (and types don't need to declare that they implement them), it's garbage collected, it does not permit pointer arithmetic, and it supports threads and tasks (via "goroutines"). One of its goals was to make software development faster and dependency management easier. That's exactly what we need. Go is still too young, but it surely needs to be followed.
Rust is an experimental language by Mozilla Labs. It's still in development and its syntax is not finalized yet but it looks incredibly promising. It currently has a compiler that uses LLVM as its backend. It's memory safe, immutable by default, concurrent (with coroutine and actor support), it has first class functions and a neat way of expressing and enforcing invariants at runtime and compile-time. It also supports "localized rule-breaking" meaning that safety rules can be broken "if explicit about where and how".
C. If we manage to port more and more code to a language in the previous category (higher level) then we might not need any neat feature in our systems programming language, just sheer speed and compatibility with the hardware.
C would be a great choice, and it often manages to implement features that are important for performance before C++. It's also way easier to parse and reason about than C++ so it's better for tools, it's easier for compilers to work with it (good luck finding a 100% compliant C++ one) and so on.
Many people in the industry are looking back at C or going for a more C-like programming style, I even heard talks of projects to extend C with novel object models (via code generation) to avoid C++.
OpenCL. Yes, it's a language for GPUs. That's to say very powerful processors made of many low-power, in-order, shared-cache parallel processing units. In other words, the future. We need a language for data-parallel computations, and OpenCL seems to be a reasonable choice. I would personally prefer something even more restrictive and stream-oriented, with well-defined inputs and outputs (easy to check for correctness!) and means of connection and buffering between the kernels, but OpenCL is the standard and it's supported by every GPU vendor. Moreover, we already have several CPU implementations, like FOXC that spits out C code from OpenCL, Intel OpenCL SDK for x86 CPUs, IBM OpenCL for Cell. There are also similar initiatives targeting NVidia's proprietary Cuda language, like GPUOcelot. Of course LLVM plays a major role again, even AMD's GPU compiler seems to be based on it, and there are both backends and frontends in the works for it.
Your own C/C++. To a degree, everyone is already using their own version of C++ incompatible with everyone else's. That's because to use C++ in production you have to both restrict and extend it. That's usually done with coding guidelines and reviews plus custom libraries with a sprinkle of preprocessor macros (and setting your compilers to the maximum warning level plus enabling warnings as errors). In other words, horribly.
It is possible, with tools, to do better. Parsers and rules can be used to restrict the language in a well-enforced way. There are a few of such tools and they are mostly commercial (cppcheck is a notable exception, parsing and understanding C++ is a nightmare) like PC-Lint, Coverity, Lattix (for dependency analysis and design rules enforcement), PVS-Studio and so on.
Parsers can also be used to enhance the language, by gathering and exposing information, either to be used directly (i.e. reflection) or to be fed into code-generation tools (to generate bindings, for serialization, etc.). It's a very interesting possibility, and we have a number of different ways to extract information.
Compilers can help: it's possible to parse their symbol and debugging files, and some can even directly output useful data (most notably GCC, and Clang with libclang; even Visual C++ considered this, but I don't see it happening yet). Doxygen also has a good parser that can extract quite a bit of interesting information, and SWIG can parse headers. There are also a few parsers dedicated to language extension and analysis: PyCParser, CTool and CIL for C; Transformers, OpenC++ and VivaCore (which is used by the PVS-Studio linter); ELSA; Harmonia, which tries to build an incremental tool for refactoring and program transformation; and commercial ones like Understand.
Last but not least, code generation: after parsing either vanilla C++ or an annotated version with custom extensions, for most practical uses we'll need to emit some code.
This is, for example, the approach followed by Qt's MOC compiler to implement its object model. Some tools support full source-to-source translation (parsing and emitting code), like the DMS toolkit, Stratego/XT or the ROSE compiler.
But code generation is not only useful for source-to-source transforms and language enhancement; it's also vital to bridge data and DSLs to C++, and in general to create interfaces between different systems. That's the approach used by object models like COM, by data formats like Google's protobufs, and in many other domains. Many frameworks approach this as a text-generation problem (like Cog and Cheetah), similar to what JSP does for web pages, but there are also generators capable of working directly from a syntax tree, or something in between the two, like cppgenmodel and rGen.
Ok so, let's say I have a few months and I have to write a new game engine. What would be my best bet? Well, I'd have to toss a coin between CLI+C and Lua + my own C++.
The CLI is a very good platform, and it's surprisingly well supported in gaming. For current-gen it would be a no-brainer: Mono already supports everything you need, and even rolling your own CLI compiler with LLVM is not an impossible task at all (and you can generate symbols so you can use existing debuggers, profilers and so on). Microsoft will surely continue to push it even on future platforms, and other vendors and many studios are interested in the technology as well. The only problem I can see is that you won't get a 100% guarantee that you can support any future platform day-zero, as we are not seeing vendors shipping their own CLI for general game development yet.
The other option is to invest in a good, proprietary data/object/module/service/task model for C++, banishing its own OO system and restricting object usage mostly to services (contained in dynamically linkable, hot-swappable modules) and to data structures. A generic data model with reflection and serialization support could be used for communication between modules, RPC, persistence and data-parallel computations. All this wrapped in some DSLs to code-generate all the fluff. It won't be any easier (personally, I think it would be harder) than writing a CLI compiler with LLVM from scratch, and it will be much messier, but it's 100% future-proof.
AI-specific languages. I believe in a world where no game will need to implement a state machine. There are quite a few "AI" languages, from the obvious Prolog family to constraint programming languages to more general frameworks like Soar. I'm not an AI expert, and I don't believe that we'll ever see one of these in a game, but they are great inspirations for DSLs, often embedded into other languages (internal DSLs).
Languages for number crunching. As I wrote for the scripting and AI-specific languages, there is a lot of inspiration to be found in "obscure" task-specific languages. Parallelism, concurrency and memory bandwidth are not only problems nowadays; they were problems even for the early supercomputers. Most of these languages are never going to be seen in a game (and many of them are hardly seen in applications anywhere!) but understanding them is great, especially if you're going to design a DSL.
Nowadays "functional" is considered cool because it should make concurrency easier (immutability, no shared data). It's patently false, as practice demonstrates: there is no purely functional language that scales well for number crunching (and yes, I know how Haskell can parallelize things using sparks), while there are purely imperative languages that easily scale to thousands of processors (Cg, HLSL, on GPUs). Mutability is not the problem (especially if you don't share...); controlling data is! Making a list of "interesting" languages is pointless, but I'd suggest keeping an eye on dataflow languages, like Sisal and SA-C.
Source code - a DVCS, like Mercurial.
Source art - a distributed, versioning (copy-on-write) filesystem: Lustre and ZFS (combined). I also hear nice things about using a VCS plus a dependency-tracking system to sync only what's needed for a given task (see Shotgun and Tactic)...
Built art/code - continuous-build machines publishing tested builds to a distribution system (i.e. BitTorrent).