…Participation for marks? Bonus points for initiating.
I agree with the comments in class concerning the plateauing trends for previous technologies, though we may note that vinyl is making a comeback with the advent of newer record players that have built-in iPod docks. What overtakes video games? I suspect we are reaching the categorical limit of progression for “mediums”. What I mean is, if we’re willing to call an artificial reality (like that of World of Warcraft) a video game, then I can only imagine the inputs/outputs becoming more sophisticated. Should things become so sophisticated that it is less reasonable to use the term “video” or even “game”, I would submit that the replacement would be “synthetic reality”.
We use console controllers, mice/keyboards, microphones, webcams, etc., as input devices to control our digital avatars. These inputs will become increasingly sophisticated in order to allow for better control of our digital selves. And our technology keeps growing: soon we won’t need to press the ‘B’ button to make our digital representation jump, we will simply think “jump” and an input device will read our mind [this is existing technology, not science fiction… http://www.emotiv.com/ ].
As devices like Emotiv’s EEG headset gain higher resolution on neural processes, we may find we can better represent ourselves in a virtual world. Concurrent growth in output devices [http://www.vg247.com/2012/09/11/xbox-720-microsoft-patents-projector-tech-turns-rooms-into-game-worlds/] will inevitably allow for the creation of new artificial worlds for us to explore. The resolution of these artificial worlds may (at some point in the relatively near future) surpass that of our “real life”. Should modern medicine and robotics keep to their current trend of progression, I suspect we will spend more time in artificial reality than in the “real world”. You think intellectual property law is archaic now? Imagine the imaginary wars fought over artificial space simply to control the attention (the new currency…) of “users”.
The artificial reality will become more “real” to the infinitely connected users (the I/we/us/all of the internet), sharing all knowledge in an introspective frontier of limitless potential stories/lives/worlds/universes. It may not have escaped some of you programmer types that such resolution and potentiality is possible through the use of fractals and by drawing only the portions of an artificial reality that are currently relevant – much like the on-demand rendering your graphics card already does. Everything beyond is “potential”, and won’t be decided upon until the user/observer attends to that portion of reality.
Sort of like the observer effect in quantum physics [http://en.wikipedia.org/wiki/Observer_effect_%28physics%29], or the superposition in the Schrödinger’s cat thought experiment. All potentials coexist until they are attended to, at which point reality “makes a choice” and is drawn/physically actualized.
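For the programmer types, here is a minimal toy sketch (in Python, purely illustrative – not anyone’s actual engine) of that “only draw what is observed” idea: the world exists only as a seeded formula, pure “potential”, until an observer attends to a chunk, at which point that chunk is actually generated and cached. The names chunk_value and LazyWorld are hypothetical, invented for this illustration.

```python
# A toy sketch of lazy, observation-driven world generation:
# nothing is stored up front; a chunk is only "actualized" when observed.
import hashlib


def chunk_value(seed: int, x: int, y: int) -> float:
    """Deterministic 'potential' for a chunk: the same seed + coordinates
    always collapse to the same content, so nothing needs to be stored."""
    digest = hashlib.sha256(f"{seed}:{x}:{y}".encode()).hexdigest()
    return int(digest[:8], 16) / 16**8  # value in [0, 1)


class LazyWorld:
    def __init__(self, seed: int):
        self.seed = seed
        self.realized = {}  # only the chunks someone has actually looked at

    def observe(self, x: int, y: int) -> float:
        """Attending to a chunk 'actualizes' it (and caches the result)."""
        if (x, y) not in self.realized:
            self.realized[(x, y)] = chunk_value(self.seed, x, y)
        return self.realized[(x, y)]


if __name__ == "__main__":
    world = LazyWorld(seed=42)
    print(world.observe(0, 0))   # this chunk now "exists"
    print(len(world.realized))   # 1 -- everything else is still potential
```

The appeal of the design is that an effectively infinite world costs nothing to store; only the attended-to portions are ever actualized, much like the frustum-culled view your graphics card actually draws each frame.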
For those versed in science in general, it may not have escaped you that OUR reality has a theme of fractal iterations. Everything seems to be connected in some strange way that our finite minds don’t quite comprehend, but we notice that an atom seems similar to a solar system, which seems similar to a galaxy; or a cell in some way resembles a metropolitan city, which in some way resembles a brain, which in some way resembles, etc…
So is the end result us creating our own realities, young deities in our own right taking a shotgun approach to understanding ourselves by delving further and further into our own minds? Is this not what we call a “game”? Restricting the rules by limiting our own omniscience in an attempt to forget the fact that we’re all alone? Varying the rules of each reality in the hope that something unrecognizable might be produced within ourselves? The infinite path to omniscience paralleling the holy path of self-reflection?!?
To quote DMX, one of the greatest poets of our time: “You think it’s a game? You think it’s a F^%$^%ing GAME?”
– TD (gamesprite/god/law student)
Awesome post Tyler.
Not sure I could possibly agree more that what is probably “next & beyond” occupies some combination of virtual reality, augmented reality (the real-world/digital-world interface – another example is Google Glass: https://plus.google.com/+projectglass/posts ) and other stuff. Add in future-generation Maker Bots ( http://www.makerbot.com/ ) and/or brain-impulse controllers ( http://www.ted.com/talks/tan_le_a_headset_that_reads_your_brainwaves.html ) as the other stuff and maybe we start getting closer.
What occasionally freaks me out a bit is the possibility that we are already in the next generation of video gaming (quite literally) and don’t know it yet, as suggested by Nick Bostrom of Oxford. Don’t read further if you have a hard time sleeping when the questions get really existential: http://www.unmuseum.org/notescurator/videogame.htm
– jon festinger
Interesting post Tyler – I can’t help but wonder whether, if and when we get to that point of synthetic reality, we can even call it a game anymore. I also wonder how far it will go and how far we will take it. Will we be able to experience physical pain one day when our avatar dies? How will this be regulated? Can it be regulated? Will hardcore gamers simply mod around any restrictions in place? So many considerations!
On the subject, here is a slightly chilling but excellent short film called “Sight” that came out a few months ago. Well worth 7 minutes of your time. Try to count the privacy issues….
Access it through the Huffington Post piece for a bit of context. Be sure to go full screen – it’s best to go right to the Vimeo link (also on YouTube if you prefer).
‘Sight’: Short Film Takes Google Glasses To Their Logical Nightmarish End (VIDEO):
http://www.huffingtonpost.com/2012/08/03/sight-short-film_n_1739192.html
Very chilling short film – thanks for sharing! The scary thing is that we are not too far off from a life like this. Even today apps are so pervasive that there is virtually an app for everything. There is a fine line between using technology to better our lives and using it just for the sake of its existence – the latter being something that apps provide the impetus for.
Great post Tyler, I couldn’t agree more. As the line between the virtual and the real blurs, will the characterization of “game” still be relevant? How will the laws of the real world accommodate the virtual? Even with today’s technology, many “hardcore” gamers identify more with their online avatar than with their real-world self. Will the law adapt to protect their interests?
It’s clear that a world like the one in “Sight”, posted below, is not far off. Social media already incorporates many game elements into our everyday lives (checking in to earn reputation at a particular location, building your travel map on Facebook, etc.). “Games” like Ingress have taken this a step further, making a virtual game out of everyday sight-seeing. It will be interesting to see whether the law (legislators) takes a proactive stance on this changing landscape or waits to react once it’s already part of our everyday lives (see the lack of/delayed response to the proliferation of digital media and copyright).
One more interesting development:
CES 2013: Microsoft Introduces IllumiRoom – Geek Magazine
http://www.geekexchange.com/ces-2013-microsoft-introduces-illumiroom-32547.html
On the subject, here is a timeline-based vision of the future of the net. I’m having a hard time visualizing what exactly video games will be in this world, but they may well be a real-time version of puzzle/tag/D&D/wargames. Tunable information from massively multiplayer games where you are always “in the game” (as opposed to being “the controller”) may be where this is headed.
The End of the Web, Search, and Computer as We Know It | Wired Opinion | Wired.com
http://www.wired.com/opinion/2013/02/the-end-of-the-web-computers-and-search-as-we-know-it/
One more thing. Sorry for the double-post from News of the Week, but it belongs in this thread on the future of games as well.
Read this and try to imagine a game where all the gamers have super-advanced maker bots that create artificial pseudo-humans… Now combine that with the timeline version of the web in the story above. Now imagine the game is structured like Eve Online – massively multiplayer, real time…
100 years from now….
Edinburgh researchers first to 3D-print with human embryonic stem cells: http://www.theverge.com/2013/2/6/3957772/edinburgh-researchers-first-to-3d-print-human-embryonic-stem-cells
jon
One more piece of the puzzle. Pondering what games will need in the future, a developer calls for actual in-game crime: “Real Crime in Virtual Worlds”
http://www.gamasutra.com/blogs/JohnKrajewski/20130214/186639/Real_Crime_in_Virtual_Worlds.php
jon
…& another one “Microsoft’s Xbox chief predicts ‘we’ll all be wearing 10 sensors’ in the next decade” http://mobile.theverge.com/2013/3/7/4075196/microsoft-don-mattrick-sensor-predicitions
jon
I can’t seem to get this site to let me author my own post, but I think this fits well as a response to Tyler’s post too, so here it is:
Second Take: Jacob Todd on what’s “next” for games
Some of you may have seen an article in the New York Times yesterday about Lockheed Martin’s developments in the field of quantum computing (http://www.nytimes.com/2013/03/22/technology/testing-a-new-class-of-speedy-computer.html?pagewanted=1&_r=1&smid=fb-share&). Although the future of this technology is still uncertain, researchers believe that it could exponentially increase a computer’s processing power in the near future.
One way in which this technology could affect the video game industry has to do with the quality of video game engines. In class, some have argued that video games have reached the point where more pixels do not necessarily lead to better graphics, and that video game graphics may be reaching the limit of their potential. While I agree with that notion to some extent, I suspect that in the near future video game developers will begin to transition away from traditional animation techniques toward a reliance on importing graphics from real-world objects and environments.
Some of you may remember a highly controversial YouTube video posted by a video game company named Euclideon in 2011 (http://www.youtube.com/watch?v=00gAbgBu8R4). In that video, Euclideon claimed to have developed a technique that allows their video game engine to process “unlimited” amounts of data without relying on quantum computers. One of the alleged benefits of this technique was that it enabled their engine to visualize the enormous data sets collected through advanced scanning technologies. To some extent this was old news, as EA and other companies have been using scanning technology to create graphics for basic objects like soccer balls for years. What was novel about Euclideon’s approach was that it allowed scanned objects to be visualized in such detail that a person would be hard pressed to distinguish the real-world object from its in-game representation. Furthermore, the engine would supposedly allow an effectively unlimited number of these incredibly detailed objects to be rendered on screen at the same time.
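Euclideon never published how their engine actually works, so the following is only a rough Python sketch of the interpretation most commonly offered at the time: store the scanned points in an octree and, for each screen pixel, descend only until a node looks about pixel-sized, so per-frame work scales with screen resolution rather than with the number of points in the scene. The names OctreeNode and pick_point are mine, and the child-selection step is deliberately simplified; treat it as an illustration of the idea, not Euclideon’s algorithm.

```python
# Illustrative "one point per pixel" octree descent: recursion stops once
# a node's angular size falls below a pixel, so extra scanned detail
# beyond that depth never costs anything to render.
from dataclasses import dataclass, field


@dataclass
class OctreeNode:
    center: tuple          # (x, y, z) centre of this cube
    size: float            # edge length of the cube
    children: list = field(default_factory=list)  # up to 8 OctreeNodes


def pick_point(node: OctreeNode, eye: tuple, pixel_angle: float) -> tuple:
    """Return one representative point for a pixel's line of sight."""
    dx, dy, dz = (node.center[i] - eye[i] for i in range(3))
    distance = max((dx * dx + dy * dy + dz * dz) ** 0.5, 1e-9)
    angular_size = node.size / distance  # small-angle approximation
    if angular_size <= pixel_angle or not node.children:
        return node.center  # good enough for this pixel
    # Simplified: recurse into the child nearest the eye. A real traversal
    # would pick the child the pixel's ray actually passes through.
    nearest = min(
        node.children,
        key=lambda c: sum((c.center[i] - eye[i]) ** 2 for i in range(3)),
    )
    return pick_point(nearest, eye, pixel_angle)
```

If something like this is what they meant, the “unlimited” claim is really a claim about the search being bounded by the number of pixels, not by the size of the scanned data set.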
In the years since posting that video, various geospatial companies have begun using Euclideon’s technology to create limited interactive 3D environments that are beginning to look a lot like more explorable versions of Google Street View (http://www.youtube.com/watch?v=5086PjOKge0). The applicability of these demonstrations to video games is admittedly limited for the time being, but in a few years scanning and data-processing technology may progress to the point where the main function of your Kinect is to import a picture-perfect representation of a user’s surroundings (house, street, neighborhood, etc.) into their video game.
Although it appears that Euclideon may have made some unsubstantiated claims, I think their YouTube video serves to illustrate that the video game industry hasn’t yet plateaued in its pursuit of higher levels of realism. I’ll agree with Tyler that, for the time being, the gaming industry seems more concerned with fostering deeper levels of interconnectivity (I personally am not a big fan of the PS4’s focus on social media) and creating more input/output options (Microsoft’s projector, motion-sensor controllers, etc.), but I can’t help but think that all of that is just ancillary to the larger goal of creating more realistic-looking games.
It is also interesting to speculate about the potential legal issues that may arise from increased use of scanning technologies. Obviously the legal implications of a future technology are at this point unclear, but one could expect that privacy violations would be a serious issue. Google Street View already deals with this to a certain extent by blurring out people’s faces. The ARMA developers who were arrested in Greece for taking photos of a Greek military base (and released on bail earlier this year) also serve as a warning against unrestricted recreation of real environments in games (http://www.forbes.com/sites/erikkain/2013/01/15/arma-developers-will-be-released-from-greek-prison-on-bail-free-to-return-home/).
Hey, interesting post. I think you’re probably right in saying that scanned-in graphics are going to be the future of gaming, but I wouldn’t necessarily agree that the recent trends in interconnectivity are purely ancillary to the future of the gaming industry. I came across a video a while back that explains how one company is using scanning technology and interconnectivity to make a racing game with real graphics, where the player gets to race alongside and against professional racecar drivers by transmitting the real cars’ positions into the virtual recreation in real time using GPS. It’s kind of cool, but the video is old and I haven’t heard any follow-up on the game, so it looks like they may have run into some of the processing problems you mention: http://news.bbc.co.uk/1/hi/programmes/click_online/8334595.stm
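For what it’s worth, here is a back-of-the-envelope Python sketch (definitely not the actual system from the BBC piece, whose details aren’t public) of how live GPS telemetry from a real car could be dropped into a virtual recreation of the same track: pick a reference point, convert latitude/longitude offsets into local metres with a simple equirectangular approximation (good enough over a few kilometres), and place the real car’s avatar at those coordinates. The function name and the coordinates below are hypothetical.

```python
# Map a real car's GPS fix onto a virtual track: offsets from a chosen
# reference point are converted to (east, north) metres.
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres


def gps_to_track_metres(lat: float, lon: float,
                        ref_lat: float, ref_lon: float) -> tuple:
    """Convert a GPS fix to (east, north) metres from a reference point,
    using an equirectangular approximation valid over short distances."""
    lat_r, lon_r = math.radians(lat), math.radians(lon)
    ref_lat_r, ref_lon_r = math.radians(ref_lat), math.radians(ref_lon)
    east = (lon_r - ref_lon_r) * math.cos(ref_lat_r) * EARTH_RADIUS_M
    north = (lat_r - ref_lat_r) * EARTH_RADIUS_M
    return east, north


if __name__ == "__main__":
    # Hypothetical start-line reference and a fix a short distance away.
    start_line = (52.0706, -1.0174)
    fix = (52.0712, -1.0150)
    print(gps_to_track_metres(*fix, *start_line))
```

The hard part, as you say, isn’t the coordinate math – it’s streaming and smoothing those fixes fast enough that the “real” car doesn’t stutter around the virtual track.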