We prepared this video as part of a demo of the framework I'm working on for my PhD. In a nutshell, it shows a simple Internet of Things example, a brief explanation of how our framework works, and how we applied it to the scenario. Hope you enjoy it!
16 Jun 2013 | project: Open Objects
Language pushes us toward classification. When we choose one particular word over another, we make a conscious decision to put a certain subject in a certain box. For example, when I say that today the sky was blue, I produce a mental image of a blue sky in your mind. So even if there were some clouds in the sky, I chose to classify it as a blue sky, and you pictured a perfectly clear one. In a way, language conditions us into putting concepts in distinct boxes and, since we use language as a basis for thought, we ourselves become conditioned by classification. We base our morals and our conduct on classification: this is good and that is bad, I like this and I dislike that, this is green and that is blue. But obviously these classifications, these morals, differ from person to person and from culture to culture. In Japanese, for instance, the same word can mean both green and blue, so native Japanese speakers do not distinguish between the two colours in the same way we Westerners do.
If we live and swear by language and by these boxes of classification, we tend to be too reductive, and if we don't see past them, if we don't look through the classification, we can't really say that we fully understand a subject. Because both we and the world that surrounds us are so organic, analogue and varied! All this to say that I have a special interest in the in-betweens, the grey areas. The extremes are boring! There is a world between white and black, between right and wrong, between I like you and I love you. And those are exactly the places where all the interesting stuff happens! For a few years I have had the idea of capturing the grey areas as a photographic project, but I could never make it happen; I could never find the right visual language. This has been a minor frustration of mine: I have this interest, this strong feeling that I want to express, but am not yet able to. I still think about it from time to time, so I believe I will eventually find the right way to do it!
25 Mar 2013 | project: Grey Areas
Well, let's just say I've got better at this over the last couple of years. The left image was one of the first I "scanned" with my DSLR, and the one on the right I've just rescanned using the techniques described below (higher resolution available here). Right now I can get higher resolution and better image quality than what street labs give you on CD.
I've seen many articles on the web explaining the basics of digitising film negatives or transparencies with a digital camera. The basics are quite simple: you photograph the negative against a light source and invert the result. That's it. But that alone led me to scans that looked like the one on the left, above. Because I've never seen a tutorial that told "the whole story" of how to do it properly, I've decided to put together what I've learnt during the last two or three years of scanning film with my DSLR.
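To see why a plain inversion isn't enough for colour negatives, here is a minimal sketch of the step in code. It assumes the scan is already loaded as a float RGB array, and that you've sampled the colour of the unexposed film border (the orange mask) from the same frame; the function name and array conventions are my own illustration, not part of any particular tool:

```python
import numpy as np

def invert_negative(scan: np.ndarray, film_base: np.ndarray) -> np.ndarray:
    """Invert a colour-negative scan.

    scan:      RGB array with values in [0, 1]
    film_base: RGB colour of the unexposed film border (the orange mask),
               sampled from the same scan.

    Dividing by the base colour before inverting removes the orange cast
    that a plain `1 - scan` leaves behind.
    """
    balanced = np.clip(scan / film_base, 0.0, 1.0)
    return 1.0 - balanced
```

A plain `1.0 - scan` is the "basics" version; the division by `film_base` is the kind of extra correction that separates the left image from the right one, though a full workflow also involves white balance and per-channel curves.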
First of all: Why?
These are my reasons; you may obviously have different ones. Some people do this because it's faster than using a scanner, but that depends on how much time you spend post-processing. I spend a bit more than I would like to admit, but it is time spent doing something that gives me pleasure, not pressing buttons in poorly designed software and waiting for a tedious scan.
All the following instructions aim to extract the best possible resolution, colour depth and dynamic range from the film, while keeping image noise as low as possible. I also aimed to keep the whole process as quick as possible. I think each scan has come out better than the one before, because I keep improving the process, and I've now reached a stage where I'm quite happy with the results.
I've separated this tutorial into five sections, and you may want to skip or skim some of them.
14 May 2012 | project: Photography
Lisdon the Metropolis:
This is the basic premise of the new project I've made with Amadeu Dias. By overlapping sounds recorded in London and in Lisbon, we've tried to somehow eliminate the distance and create a metropolitan dialogue, whispered in our ears. The project aired on the 21st of October on Radio Futura, as part of the Future Places festival.
Due to the extensive use of stereo to differentiate the two locations, we suggest using headphones for best results.
We, as the Double Exposure Collective, are currently exploring a similar concept but with images instead of sounds, by exposing the same 35mm photographic roll twice, once in each city. More on this soon!
25 Oct 2011 | project: Double Exposure Collective
This is a fully parametric extra shelf generated by a Processing sketch I've uploaded to Thingiverse. When I want another shelf, I measure my space and the things I want to store in it, change the application's parameters, and generate a new extra shelf, perfect for my needs. Apart from being quite useful in my tiny student housing space, I've made this as an example of a vision of the future: Parametric Autonomous Super Stores.
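The actual Processing sketch lives on Thingiverse, but the parametric idea can be sketched in a few lines: feed in your measurements, get out the exact panel sizes to fabricate. The function name and panel layout below are a hypothetical illustration, not the real sketch's code:

```python
def shelf_cut_list(width, height, depth, n_shelves, thickness):
    """Compute a cut list (panel sizes in mm) for a simple open shelf
    unit: two side panels, plus horizontal shelves that fit between them.
    Illustrative model only, not the actual Thingiverse sketch."""
    inner_width = width - 2 * thickness  # shelves sit between the sides
    return {
        "side": {"size": (depth, height), "count": 2},
        "shelf": {"size": (inner_width, depth), "count": n_shelves},
    }

# Example: a 600 mm wide, 300 mm deep, 800 mm tall unit with 3 shelves
# cut from 18 mm board.
cuts = shelf_cut_list(width=600, height=800, depth=300,
                      n_shelves=3, thickness=18)
```

Change any parameter and the whole cut list updates, which is exactly what makes the "measure, regenerate, fabricate" loop work.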
Today we go to Ikea and can choose a furniture set, combine modules, pick materials and collect the right products from the warehouse. Now imagine that we could choose not only modules, colours and materials but also each individual parameter of the furniture piece, such as the height, depth, or the number of shelves and doors, from the store's website, and preview it in place, virtually overlaid on a video feed of our living room, using an augmented reality phone or tablet. When we finish configuring our furniture, we place the order and wait for it to be completed. Digital fabrication makes it cheap and easy to build unique objects. When we receive confirmation that our new furniture is ready to be picked up, we drive to the store, park in front of one of the picking bays, and let the system scan our phone (or a printed barcode, for instance). A few seconds later, the autonomous warehouse delivers our own unique furniture kit.
Now wouldn't that be cool?
31 Aug 2011 | project: Digital Fabrication
© Paulo Ricca 2011