Good Browser, Bad Browser

Good Browser, Bad Browser is a take on the blurry line between personalisation and privacy. We created an interactive installation that showed people the privacy trade-offs between using a protected browser and an unprotected one.

Good Browser

Bad Browser

Overview

The project aims to intervene in the recommendation system algorithm used to personalise user experiences on the internet.


The interactive installation was created to communicate to users, in a captivating manner, the magnitude of data being collected from their devices, and to showcase the difference between safe, private browsing and open browsing.

Team

Eshwar V, Harsh B, Mrinalini K

Timeline

4 Weeks

The Context

Our goal was to create an intervention that brings out the flaws of an algorithm. We chose the personalisation algorithm used by most digital services, and specifically chose to work with browsers and the data they collect.


A recommendation system can collect a vast amount of data on users' activities, preferences, and behaviour, creating a detailed profile of each user. This can help users achieve their goals and find what they want with ease, but at the cost of their privacy. What would the user prefer? Which one is good, which one is bad? These questions led to the experiment we conducted.
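
To make the trade-off concrete, here is a minimal, hypothetical sketch of such a profile. This is not code from any real browser; the tags and weights are invented purely to illustrate how tracked interests can re-rank the very same search results.

import java.util.*;

// Interest profile built from tracked behaviour: tag -> weight
HashMap<String, Float> profile = new HashMap<String, Float>();

float score(String[] resultTags) {
  float s = 0;
  for (String tag : resultTags) {
    if (profile.containsKey(tag)) s += profile.get(tag);
  }
  return s;
}

void setup() {
  // The unprotected browser has quietly accumulated a profile
  // (tags and weights invented for illustration)
  profile.put("budget-travel", 3.0);
  profile.put("beach", 2.0);

  // The same three results for one query, tagged by content
  String[][] results = {
    {"flight-to-goa", "luxury"},
    {"flight-to-goa", "budget-travel"},
    {"flight-to-goa", "beach", "budget-travel"}
  };
  for (String[] r : results) {
    println(join(r, " + ") + " -> score " + score(r));
  }
  // A protected browser keeps the profile empty: every result scores 0,
  // so nothing can be reordered around the user.
}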

is it really worth it?

Personalisation Vs. Privacy

The final installation

The Goal

To visually represent these differences in the installation, we decided to project two browsers side by side: the left representing a protected browser and the right a typical unprotected browser. Both would be set to perform the same task of booking a flight to Goa. As users walked in front of the projections, they could see the visual differences in the search results each browser recommended. This depiction lets users see how their personal data is used to tailor their online experience and raises awareness about the potential risks of online tracking and profiling. The installation doesn't label either browser ‘good’ or ‘bad’; that is left open to the interpretation of the person experiencing it. Would they prefer a private browser that may not help them get where they want, or an unprotected browser whose targeted results help them achieve their goals, at the cost of their personal data?

Our Process

Eventually, we figured out that Processing and OpenCV can be integrated; even though OpenCV is complicated on its own, we could drive it entirely from within Processing. Throughout the development phase, we leaned heavily on ChatGPT for help with code, tools, and more. We asked it to suggest tools for a concept like ours, and it gave us genuinely insightful results.


Initially, we wanted to see how object detection works in Processing. We tried two things: using Processing alone to detect an object, and then using OpenCV for the same task. Understandably, it worked better with OpenCV, since the library is built for real-world object and motion detection. We were amazed to see that our laptop cameras could actually detect each of us separately and draw something over us.
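
For reference, the ‘just Processing’ half of that test looked roughly like the sketch below. It is reconstructed from memory, so the numbers are approximate: it compares each new webcam frame against the previous one and marks pixels that changed.

import processing.video.*;

Capture video;
PImage prev; // previous frame, kept for differencing

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
  prev = createImage(width, height, RGB);
}

void draw() {
  if (video.available()) {
    video.read();
    video.loadPixels();
    prev.loadPixels();
    loadPixels();
    for (int i = 0; i < pixels.length; i++) {
      // A large brightness change between frames suggests motion
      float diff = abs(brightness(video.pixels[i]) - brightness(prev.pixels[i]));
      pixels[i] = (diff > 40) ? color(255, 0, 0) : video.pixels[i];
    }
    updatePixels();
    // Remember this frame for the next comparison
    prev.copy(video, 0, 0, width, height, 0, 0, width, height);
  }
}

This kind of per-pixel differencing finds motion but not objects, which is why OpenCV's purpose-built detectors worked so much better for us.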

Adding Complexities

We wanted users to walk in front of the projection and cast a shadow on the wall; the shadow would be detected, and data points would appear around it. We soon realised this wouldn't work as planned, so we iterated and fixed it by changing the overall setup. Below are our (attempted) schematics of the orientations we explored.

We figured out that the second orientation worked best and started working with that goal in mind. While all this was happening, our most important objective was to find a good place to set up our exhibit, and we were scouting locations all over campus.


Alongside all of this, we wanted to check whether we could use our own images, icons, and background on the actual projection. This required some heavy changes to the code, but we eventually figured it out with the help of our friendly neighbourhood ChatGPT and a few other sources on GitHub.

The Setup

Now it was time to develop the actual interface, with a good browser and a bad browser. We made background images, created ads and data-point icons based on our research, and classified them across the two browsers. The prototype exhibition was coming up, and we had to pull up our socks and get our space, projectors, and cameras set up. Everything had to be done in just two days. We scrambled, staying up until 2 a.m. fixing the final code and getting it to work. Finally, we had it: the setup and the whole thing up and running. It felt great to walk through the space and see our shadows being trailed by so many things. Link to video

Projection and Detection Unit


zeroes and ones

Our Code

Our first test piece of code (it didn't work all that well)

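// Requires the "Video" and "OpenCV for Processing" (gab.opencv)
// libraries, installed via Processing's Contribution Manager.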
import gab.opencv.*;
import processing.video.*;
import org.opencv.core.*;
import org.opencv.objdetect.*;

Capture video;
OpenCV opencv;
HOGDescriptor hog;

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
  opencv = new OpenCV(this, width, height);
  
  // Initialize the HOG based pedestrian detector
  hog = new HOGDescriptor();
  hog.setSVMDetector(HOGDescriptor.getDefaultPeopleDetector());
}

void draw() {
  if (video.available()) {
    video.read();
    opencv.loadImage(video);
    detectHumans();
  }
}

void detectHumans() {
  // Detect humans in the frame
  MatOfRect foundLocations = new MatOfRect();
  MatOfDouble foundWeights = new MatOfDouble();
  hog.detectMultiScale(opencv.getGray(), foundLocations, foundWeights);

  // Draw a white background
  background(255);
  
  // Draw the video frame with detected humans outlined in red
  image(opencv.getInput(), 0, 0);

  // Draw red outlines around the detected humans
  noFill();
  stroke(255, 0, 0);
  strokeWeight(3);
  for (Rect rect : foundLocations.toArray()) {
    rect(rect.x, rect.y, rect.width, rect.height);
  }
}

Our final code

import gab.opencv.*;
import processing.video.*;
import org.opencv.core.*;
import org.opencv.imgproc.*;

import java.awt.Rectangle;

Capture video;
OpenCV opencv;
PImage[] images = new PImage[4];
PImage backgroundImg;

void setup() {
  size(1280, 800);
  video = new Capture(this, width, height);
  video.start();
  opencv = new OpenCV(this, width, height);

  // Absolute paths from our machine; swap in your own,
  // or move the files into the sketch's data folder
  String[] imagePaths = {
    "/Users/eshwarvenkatesan/Downloads/Good Broswer Bad Browser/Bad/tag1.png",
    "/Users/eshwarvenkatesan/Downloads/Good Broswer Bad Browser/Bad/tag2.png",
    "/Users/eshwarvenkatesan/Downloads/Good Broswer Bad Browser/Bad/tag3.png",
    "/Users/eshwarvenkatesan/Downloads/Good Broswer Bad Browser/Bad/tag4.png"
  };

  for (int i = 0; i < images.length; i++) {
    images[i] = loadImage(imagePaths[i]);
  }

  backgroundImg = loadImage("/Users/eshwarvenkatesan/Downloads/Good Broswer Bad Browser/Bad/BB.png");
}

void draw() {
  if (video.available()) {
    video.read();
    opencv.loadImage(video);
    detectShadow();
  }
}

void detectShadow() {
  // Invert the grayscale frame so dark shadows read as bright,
  // then threshold to isolate them
  opencv.gray();
  opencv.invert();
  int threshold = 200;
  opencv.threshold(threshold);

  ArrayList<Contour> contours = opencv.findContours();

  // Display the background image (the browser interface)
  image(backgroundImg, 0, 0, width, height);

  // Fill the detected shadow with black
  fill(0);
  noStroke();
  for (Contour contour : contours) {
    beginShape();
    for (PVector point : contour.getPoints()) {
      vertex(point.x, point.y);
    }
    endShape(CLOSE);
  }

  // Merge every contour's bounding box into one rectangle around the person
  Rectangle combinedBounds = null;
  for (Contour contour : contours) {
    if (combinedBounds == null) {
      combinedBounds = contour.getBoundingBox();
    } else {
      combinedBounds = combinedBounds.union(contour.getBoundingBox());
    }
  }

  if (combinedBounds != null) {
    noStroke();

    // Set the base size of the images
    float imgWidth = 50;
    float imgHeight = images[0].height * (imgWidth / images[0].width);

    // Per-image scale factors and horizontal/vertical offsets
    float[] scaleFactors = {3, 1, 3, 2};
    float[] offsetsX = {-20, -20, -10, -30};
    float[] offsetsY = {0, 10, 20, 0};

    // Display two images along the left edge of the bounding box
    for (int i = 0; i < 2; i++) {
      float x = combinedBounds.x - imgWidth;
      float y = combinedBounds.y + (combinedBounds.height / 2.0) * i;

      // Adjust the size and position of each image
      image(images[i], x - offsetsX[i], y + offsetsY[i], imgWidth * scaleFactors[i], imgHeight * scaleFactors[i]);
    }

    // Display two images along the right edge of the bounding box
    for (int i = 2; i < 4; i++) {
      float x = combinedBounds.x + combinedBounds.width;
      float y = combinedBounds.y + (combinedBounds.height / 2.0) * (i - 2);

      // Adjust the size and position of each image
      image(images[i], x + offsetsX[i], y + offsetsY[i], imgWidth * scaleFactors[i], imgHeight * scaleFactors[i]);
    }

    // Draw a fixed number of green circles, clustered around
    // random points on the shadow's outline
    int numCircles = 30; // number of green circles to draw
    int circleRadius = 5;
    int clusterRadius = 30; // radius of each cluster
    float circleSpacing = 5; // spacing between the circles
    fill(0, 255, 0);
    noStroke();

    for (int i = 0; i < numCircles; i++) {
      // Randomly pick a contour
      Contour contour = contours.get((int)random(contours.size()));

      // Randomly pick a point from that contour
      PVector point = contour.getPoints().get((int)random(contour.getPoints().size()));

      // Calculate a random angle and distance within the cluster
      float angle = random(TWO_PI);
      float distance = random(clusterRadius);
      float offsetX = cos(angle) * distance;
      float offsetY = sin(angle) * distance;

      // Calculate the circle's position with spacing applied
      float x = point.x + offsetX * circleSpacing;
      float y = point.y + offsetY * circleSpacing;

      // Draw the green circle
      ellipse(x, y, circleRadius * 2, circleRadius * 2);
    }
  }
}
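
Like the test sketch, this needs the Video and OpenCV for Processing libraries. One caveat: the inverted-brightness threshold of 200 is specific to our projector and room lighting, so anyone recreating the setup should expect to tune that value.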

The Final Outcome

Learnings and Reflection

Working on this was a truly rewarding experience; seeing how people felt when they entered an immersive experience made me really happy about the effort that went into this project. I learned a lot about privacy and the power the Internet holds to make or break someone's life. This quest piqued my interest in ethical technology and has pushed me to think about the impact of technology on my users before I choose to deploy it for them.

I also realised that ChatGPT can code, pretty well might I add. If I were to do anything differently, it would be to make the immersive experience even more impactful: maybe using depth sensors instead of cameras, or creating an even crazier alter ego for the people who step into the experience.

All in all, I had fun. Until next time!