All right.
A new study finds that big tech tracks
you even when you've opted out.
Cal.com will no longer be open source, and there are some big developments in US and EU privacy and surveillance.
All this and more coming up on This Week in Privacy #49. So stay tuned.
Welcome back to This Week in Privacy, our weekly series where we discuss the latest updates on what we've been working on within the Privacy Guides community, and this week's top stories in data privacy and cybersecurity.
I'm Jordan,
and with me this week is Nate.
How are you, Nate?
I'm good.
It's been a busy week,
but I guess I can't complain.
How are you?
Yes, also a busy week,
but now let's jump into the biggest news
in privacy and security from the past
week.
So this story here is from 404 Media: Google, Microsoft, and Meta are all tracking you, even when you opt out, according to an independent audit. An independent privacy audit of Microsoft, Meta, and Google web traffic in California found that the companies may be violating state regulations and racking up billions in fines. According to the audit from privacy search engine WebXRay, 55 percent of sites it checked set ad cookies in a user's browser even if they opted out of tracking. Each company disputed or took issue with the research, with Google saying it was based on a "fundamental misunderstanding" of how its product works.
So this company itself, WebXRay: they viewed web traffic on more than 7,000 popular websites in California in the month of March and found that most tech companies ignore when a user asks to opt out of cookie tracking.
And this is specifically concerning
because California has privacy
legislation,
thanks to its California Consumer Privacy
Act, which allows users to,
among other things,
opt out of the sale of their personal
information.
And there's basically a system called Global Privacy Control, which, in some browsers, is a switch, and in other browsers, an extension you have to install. According to the WebXRay audit, Google failed to let users opt out 87 percent of the time. Google's failure to honor the GPC opt-out signal is easy to find in network traffic.
Now, I think this has kind of always been a concerning thing, right? There are these opt-out signals, and we're not really sure how effective they are, because a lot of these signals are often ignored. Like, we saw the Do Not Track signal that used to be kind of big, right? That was also ignored by a lot of websites. And now we're looking at this new thing, which is GPC. A lot of companies basically argue that they're not really sure what this means, and they're just going to track you anyway, which is kind of silly. So, you know, it's not really that surprising to see that so many websites don't comply with this. Did you have any thoughts on this, Nate? Because, unfortunately, I kind of assumed this was going on.
Yeah, I mean... I don't know. I always have thoughts on things. First of all, on assuming this has always been going on: I agree with you. But I do want to note a couple of things. GPC is supposed to be an improvement over Do Not Track, because GPC is actually legally recognized under certain privacy laws, like the California Consumer Privacy Act, for example. So websites are required to honor it.
I'm with you.
When this was initially announced,
GPC specifically, I was kind of also like,
I don't know,
why would companies listen to this?
Like they already don't listen to things.
But I was also kind of hopeful because,
again, it is like legally required.
And we have seen in the past that
typically companies will – and I'll touch
on this in a second in the article.
But like companies do –
they kind of like to ignore things right
up until they get caught.
And then they're like, ah, okay,
you got me.
We'll play along.
Or, you know,
they'll at least start to play along.
Usually it's,
they kind of have to get caught a
few times,
but they get caught and they change what
they do.
And, so, I don't know. I guess I was hoping that maybe this would go somewhere, and it still might, if that's what happens here. But I think the real issue here, and this person they interviewed from WebXRay kind of talked about this...
Oh, where did it go, actually?
Okay, yeah.
So this person, or excuse me, Timothy Libert, who founded WebXRay, used to work at Google. He told 404 Media he felt his job at Google was to protect its users, but his bosses didn't agree. And he left the company in 2023 to start WebXRay.
And this is a quote from him: "Shortly before I left, my boss told me, direct quote: my job is to protect the company. There was another time I got into a very serious ontological discussion with a fairly senior engineer about what the difference was between taxes and fines, and they didn't understand there was a difference."
And I think this is something a lot of people in the privacy space have noticed: for these companies, the fines are not really fines. They're just a cost of doing business.
Like I remember,
I wish I could remember which story it
was,
but Meta got in trouble for something and
they got issued a fine.
And the article kind of openly said, like, they didn't make a point of saying it, it was just a real quick sentence that, if you weren't paying attention, you wouldn't even notice. But the article basically said, oh yeah, Meta is going to contest the fine, basically because it's bigger than they thought it would be. They don't even care that they got fined. They don't even care that they're wrong. They're just like, no, no, no, we set aside a certain amount of money to pay these quote-unquote "fines," which are really just a cost of doing business, a tax, like he said. But it's more than we budgeted for, and that's why we're going to fight it.
It would be like if you go to the grocery store, the corner store, whatever, and you're going to buy a soda. And they're like, oh, it's $5. And you're like, whoa, whoa, whoa, hold on. I have $5, but this should only be $3. Or maybe it is $5 now with inflation, but you know what I mean. It's not even that I don't have the money; that's just not how much I set aside for this thing. And that's basically how these companies treat it.
Yeah.
I think, to give a little bit of benefit of the doubt, it's tricky to fine these companies sometimes, in the sense that with big tech companies like Meta, Microsoft, and Google, you want to be able to levy a fine that's going to hurt them, make them pay attention, and stop them doing this crap.
But at the same time, you need to write the laws in a way where, take Privacy Guides, for example: it doesn't wipe us out if we, not that we do any of this stuff, but if we make a mistake somehow and we're accidentally collecting something we didn't know about, I don't know, just throwing it out there. A fine that is one percent of Meta's hourly revenue would wipe us out. And so you want to find that middle ground where you're not destroying the small guys, but at the same time, you're still hurting the big guys.
And I do have some sympathy for that.
But at the same time,
I feel like, as far as I know, there are no laws being weighed or suggested right now. So, I mean, it's not like they're really trying to fix that. But that's really the problem: the penalties for doing this stuff are just a cost of doing business. And yeah, I don't know, it's really unfortunate.
I think, oh, sorry.
No, go ahead.
I do think it's interesting, the stuff you said about fining a company, like if you don't want to destroy all the small companies. I feel like we could kind of get around that if we had proportional fines, like based on their profits or revenue, maybe.
But obviously, I feel like our governments are too captured by these big tech companies and lobbying and all that sort of stuff, so it's probably not super likely. But I think having a fine that is actually proportionate to how much they make... because, I don't know, I guess that might be hard to argue from a damage perspective, right? Like, if they were collecting, let's say, all of Americans' data, that wouldn't even be that much of the entire globe for Meta, because Meta has billions of users. So it'd be hard to say, and just California, you know, even less so. I feel like it might be hard to argue for the fine being so large, I guess. But it's still a problem. I don't know what the answer is, but I do think they need to be fined more, especially for not complying with this stuff. It's good, though; I didn't realize the GPC stuff was actually related to legal stuff, that there was legal precedent behind it, so that's good. But I guess, like you said, it's kind of the cost of doing business for these companies.
Yeah, like I said, that was kind of what gave me hope for it, because when I first heard about it too, I was like, we already did this, what's different this time? But it's the legal enforcement. But, I don't know. The proportional thing, I have mixed opinions on. Because, well, take my project, I'll just say it: The New Oil. I published transparency reports.
I made $20,000 last year, which was by far the most I've ever made. So let's say a ten percent penalty, right? That's $2,000. I don't have that in the bank right now; most of that money has been spent on various things. But, you know, ten percent, $2,000, for me is a lot of money. That would wipe me out. Whereas for Meta, ten percent of their, what, $10 trillion they made last year, or whatever. I don't know, I could look it up. But you know what I mean? Ten percent for them, when they're already paying four percent, they don't care. It just doesn't scale the same.
You know, a person who's making $100,000 a year and gets a speeding ticket, to them, that's just a much smaller hit compared to a person who's making $40,000 a year. So, I mean, I hear you. I just feel like that's not really a sustainable solution, personally. I could be wrong, but I don't know. It's tricky.
But yeah, I think that is the solution: until we get some kind of better legal enforcement, I don't think these companies are going to stop doing what they do. And, what was his name again? This Timothy Libert, he even said that too in this article. But I guess a question for you: what would you recommend?
Because, I mean, I think the solution here, in my opinion, would be... well, a lot of people argue that laws don't work and that we need to force companies to respect our wishes. And I think they've kind of got a point with this kind of stuff. I don't know if I'd go so far as to say laws don't work, but I think we need to do what we can to force companies to respect our wishes regardless.
Right.
I mean, I think people get too caught up in black-and-white thinking, right? You can definitely use the laws as well as doing things to protect yourself. Like, I wouldn't just use a browser without hardening it at all, share as much information as possible, and rely on this somewhat nebulous Global Privacy Control thing. You could use that along with a fingerprint-resistant browser to protect yourself a bit more: not share as much information, use email aliases, secondary phone numbers, stuff like that.
Because, yeah, in a lot of cases I'm not really sure this Global Privacy Control thing is going to be respected, like this company said. But I also think it's kind of interesting that this person, I think you read earlier that they were part of the Google team working on the cookie compliance stuff. I'm not entirely sure what they were expecting. Like, you go to work at the largest data collector in the world and you expect them to care about respecting people's privacy? I'm not really sure. I mean, I guess maybe you could make the argument that you're trying to change it from the inside, but friends don't let friends work at big tech corporations. It's not a great idea.
Yeah.
First of all, I think that was a great answer with the black-and-white thinking. I think you're right. I'm a real big fan of using multiple approaches. So, yeah, you should use a hardened browser, Tor, a VPN, all that kind of stuff, but we should also push for better privacy laws and stuff like that. So fantastic answer, thank you for saying that.
But yeah, as far as this guy specifically, I don't know when he started at Google. So it could be like, you know, there was that book, Careless People, written by Sarah Wynn-Williams, who used to work at Facebook. And to be fair, she got there back in, what, 2008 or something, back when Facebook still had the potential to be good, and she kind of watched it become the cancer that it is now.
So I don't know. If this dude had been there from the start, back when Google used to say "don't be evil" and back when Google stood up to China and all that kind of stuff, then I could kind of see it. But I feel like with these big tech companies these days, you kind of have to hit a point where you're just like, they're not going to change. You know what you're getting into.
I feel this way about a lot of systems. I want to be careful how I say this, but I feel like there are certain systems around the world that... I don't know. I'm kind of cynical on whether they can be changed. It's like you either get corrupted and become part of the problem, or you get forced out because you refuse to fall in line, because you're trying to make things better. And unfortunately, I think big tech is one of those systems where, nine times out of ten, or ninety-nine times out of a hundred, it just is what it is. It's hard to change. It's an uphill battle.
But yeah.
Yeah, but to talk a little bit more about using both angles of this approach, like trying to enforce these privacy laws: we do have an activism section on our website now. You can check that out at privacyguides.org/activism.
There's some stuff in there about how to contact your... actually, I'm not sure it's live yet, but there was a section in the works for contacting your data protection authority. And there's also a bunch of tips on there about all sorts of things, to basically build a movement behind trying to get better laws passed and to stop politicians from passing these terrible laws.
So I think that's also important. But you can do multiple things at the same time, and I think that's kind of where people get confused. They'll be like, oh, these laws are always getting bypassed, they're so useless. Well, there are some laws that have done something.
Like, we can all agree that the GDPR has had an impact, right? The right to delete has become a lot more common since the GDPR came around, and it used to be such a pain to delete your information from websites. And it's had an effect even outside the EU as well.
So I think, you know,
there's definitely examples of things that
have worked pretty well.
So it's just a matter of advocating for
better legislation.
I think it's definitely possible. There's a lot of bad stuff right now, especially with the age verification stuff, and I think in large part it's just because people aren't getting riled up enough about it.
I'm sure the politicians would probably
change their mind if people were
protesting outside parliament or outside
your government buildings.
So I think there's certain things we can
do to sway people on it.
But yeah, that's sort of my thoughts.
Yeah. Agreed. Politicians, at the end of the day, will do whatever keeps them in power. So if something proves to be wildly unpopular, they're going to find a way to walk it back, nine times out of ten.
Real quick, before we move on to the next story: Jonah gifted five Privacy Guides memberships on YouTube. So if you're on YouTube and you're kind of listening to us in the background or something, check that out. You could get a free membership trial and get early access to some upcoming videos. So thank you, Jonah.
But if that's all we've got on that story, next we're going to talk about Mastodon. And Mastodon, I think most of our listeners probably know. Or not most of you, but I think a lot of you are probably currently Mastodon users or have used Mastodon in the past. Let us know if you are a Mastodon user: one in the chat for yes, two for no.
But in the meantime, we're going to talk about some upgrades. Mastodon got a grant from the Sovereign Tech Agency fund. And the Sovereign Tech Agency is something from Germany.
I pulled up the Wikipedia page here.
And basically,
it's a part of the German federal
government.
It's part of their budget that aims to
promote and secure open source
foundational technologies.
It tries to make the open source ecosystem
more resilient against external attacks,
thereby enhancing cybersecurity and
resilience across the German economy.
And so in the past, they've funded things like, let's see here, Arch Linux with, oh my God, over half a million euros. That's crazy. FFmpeg, FreeBSD, GNU, GNOME. Oh, my gosh. All kinds of stuff. OpenStreetMap, OpenSSH, PHP, so on and so forth. WireGuard. Yeah, really, really cool stuff there.
So now they have donated to Mastodon: they've awarded €614,000, and out of that total, €90,000 has been set aside to be shared with other Fediverse projects that choose to implement the protocols developed during the work, which we are about to talk about.
So we did write an article, or rather Freya wrote an article, about this for Privacy Guides earlier this week, focused specifically on the end-to-end encryption, which I will get to in a moment. But there's a lot more in here, although that is certainly one of the more exciting features.
So there's blocklist synchronization. I know moderation has historically been a bit of a problem on Mastodon. A lot of people, I'm told...
There's a struggle, right?
And I don't want to get too philosophical
right off the bat,
but there's a struggle between...
We want...
freedom of speech.
And we want people to have a space
where they can say whatever they want,
even if we don't agree with it.
But also some people maybe just don't want that, right? Like, I have days where I know I need to not check the news, because I'm just so tired and so mentally exhausted, and I'm like, dude, I'll check it tomorrow. Now's not the time. And so I understand some people may want an account, for example, where they can go and just not see anything political or whatever.
But the point is, it's sometimes been a challenge, especially for people who are new to open source technology. Like back when Elon bought Twitter and a lot of people were checking out other alternatives, some people were like, the moderation is difficult and I'm seeing a lot of stuff I don't necessarily want to see. And that's been a thing. So now one of the things they're working on is enabling Mastodon server administrators to subscribe to shared blocklists, which is totally optional.
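As a rough idea of what this looks like in practice today, before the built-in feature lands: community blocklists already circulate, often as simple CSV files, and Mastodon's admin API has a domain-blocks endpoint. This is a hedged sketch under those assumptions; the CSV format, instance name, and token are placeholders, and it only builds the API request rather than sending it.

```python
# Hypothetical sketch of consuming a shared blocklist by hand.
# Assumes a "domain,severity" CSV (a common community format) and
# Mastodon's admin API endpoint POST /api/v1/admin/domain_blocks.
import csv
import io
import json
import urllib.request

def parse_blocklist(csv_text: str) -> list:
    """Parse a shared 'domain,severity' CSV into domain-block entries."""
    entries = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        # Only keep rows with a severity Mastodon understands.
        if row.get("severity") in ("silence", "suspend"):
            entries.append({"domain": row["domain"], "severity": row["severity"]})
    return entries

def build_block_request(instance: str, token: str, entry: dict) -> urllib.request.Request:
    """Build (but don't send) the admin-API call that would apply one block."""
    return urllib.request.Request(
        f"https://{instance}/api/v1/admin/domain_blocks",
        data=json.dumps(entry).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
```

The appeal of the official feature is exactly that admins wouldn't have to script this themselves, and could drop a list they stop trusting.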
I think one of these days I floated the idea that we want to do a Mastodon tutorial, like how to self-host your own Mastodon instance. And I definitely got the thumbs-up from Jonah; we just haven't gotten around to that one yet. That one's in the works. We have a lot of great ideas for videos, but... Anyway, so that's one of them: blocklist synchronization.
Remote media storage. This is more behind-the-scenes stuff, but it'll make things easier for server administrators, who won't need to have quite so much storage on hand. Mastodon hasn't been too crazy for me, but my instance is also a lot smaller. So yeah.
In regards to the spam thing again, they have automated content detection, which is specifically for things like spam or illegal materials. I'll come back to that one, actually. End-to-end encryption, I mentioned that. So they're going to use, I believe, Messaging Layer Security, MLS. I believe I read that in Freya's write-up, but I apologize if I'm wrong about that.
But yeah, they're going to add end-to-end encryption to DMs, which is great, because that has historically been one of the negatives of Mastodon: the DMs are not encrypted, and an administrator could still look at your messages if they really wanted to. They're also going to improve the documentation, and I believe they said they're trying to get most of this stuff done by the end of the year. And then, again, there's that €90,000 that's earmarked to help other instances, or other projects, that want to take advantage of this. So maybe someday we'll see end-to-end encryption between, like, Mastodon and Pixelfed, for example, or something like that.
I think that's super cool.
The last thing I wanted to add is
this automated content detection.
I could see the argument,
and this is just me kind of thinking
out loud.
I could see the argument where like,
we're not usually fans of this, right?
Because how long does it take?
You know, maybe "illegal material" for now means child abuse material. Or, like in Iceland: I found out that in Iceland, technically, adult material is illegal. I don't think anybody actually enforces it, but let's say you wanted to err on the safe side, err on the side of caution, and say, I just want to block anything that's adult, right? You could use this for that.
I could see how it could get a
little bit tricky if there starts to be
some kind of pressure to scan for, um,
protests or something more political,
but also at the same time,
that's one of the beauties of things like
Mastodon, right?
Is if you start to feel like this
instance is getting a little bit too
heavily moderated in a way I don't like,
you can move to another instance or you
can self-host your own instance.
So I think that's definitely one of our
favorite things about the Fediverse.
Jordan, was there anything in this announcement that jumped out at you, that caught your attention, or that you thought was interesting or wanted to talk about?
Yeah, I think the blocklist synchronization thing is definitely going to be somewhat controversial. Like you said, I've talked to people about this a decent amount, and people kind of get frustrated that there's almost, like, "censorship," in quotes, of certain people. And a lot of times, or maybe sometimes, that can be the case. But I think the biggest thing here is the small server operators who don't have a lot of time. Like, I know you run your own Mastodon instance; I'm sure it can sometimes be kind of frustrating to see CSAM and awful stuff like that popping up, because a lot of administrators are basically having to take care of that themselves. So kind of offloading that a little bit, allowing it to be a sort of community-based effort, is a decent way to go, I think.
But some people are still going to have a problem with this, because it can kind of make things become a bit group-thinky, I guess, where everyone is sort of blocking people based on... I know it's not very common, but there are a couple of instances that have just de-federated from other ones because of beef they have with each other. Which, you know, happens on every platform.
People are like that. So I don't think it's really an issue with Mastodon specifically, and I do think Mastodon is still in a better spot, because even if you find that to be an issue, you can start your own instance or just join one that doesn't have those restrictions.
But I do think it could be better to make it more obvious to users of your instance what information is being blocked, because a lot of times it's not exactly clear what blocklist is in use. I mean, I hope that's clear once this gets implemented. But also just being able to see what is actually blocked by the server, so you can make a better choice if you'd prefer to join a server that doesn't have as many blocked things.
But yeah, all the other stuff seems reasonably interesting, I think. I'm not really a big fan of the automated content detection, but I guess it's kind of needed once the network gets to a certain point. Do you have any thoughts?
Yeah, I think the automated detection thing is a blessing and a curse, because, like I said, there is the one argument that this is the same thing we would criticize Apple or Google for, right? But at the same time, Mastodon has historically had a huge problem with spam. And a lot of that... I mean, there are pros and cons to decentralization, right? And one of the cons is that there are entire servers out there that are just abandoned. I don't know why the owners are still paying for server space, but apparently they are. And they've got open registration.
And I've seen this happen a few times. I've been on Mastodon long enough that I've seen it happen more than once, where for some reason a whole bunch of bots will just go and join this one instance that's, like, six versions out of date, where the admin clearly checked out five years ago and registrations are still open. And so the bots start harassing everybody and posting spam. Usually it's in another language, and it's links to gambling sites. Or another common one that goes around is, "This is Mastodon support, you need to verify your profile," which I hope most Mastodon users are too tech-savvy to fall for.
But at the same time, and not to get too in the weeds here, I think any sort of platform needs to ask the philosophical question of what its end goal is. Because if your goal is to be like, oh, all of our users are too tech-savvy for that, they're not going to fall for it, then you don't really need to worry about weeding out the spam, right? At that point, it's buyer beware; you're expecting your users to have that level of tech savviness. But if you want something to be, what's the word I'm looking for, accessible to everyone, and to gain mainstream traction, then these are the things you have to think about.
And so I would certainly appreciate some better moderation tools. I have my instance set to approval; I have to approve everyone. I usually do, unless I think it's an AI bot, which are usually pretty easy to spot. If you're a real user, I don't care why you're here. But I can't help it when other people are spamming, right? And I can't be on Mastodon all the time to manage that. So it is kind of annoying. Yeah, I don't know. It's got pros and cons.
Although, again, like I said, with the whole "we would criticize Apple and Google for this": Mastodon's decentralized. The US government could theoretically come up to me and be like, hey, you need to start blocking, I don't know, anything in Arabic, because we're beefing with Iran right now, right? But alternately, if you're a German instance, the US government has no power over you. So, I wouldn't go so far as to say it doesn't matter, but it's a lot harder for that kind of censorship to really take hold, which I think is an advantage for sure.
But yeah. I've also got thoughts on the free speech thing, to be honest, but I'll just leave that there. Like you said, the advantage is you can always just go start your own instance. And Mastodon is one of the more user-friendly things that I've looked into hosting. I wouldn't describe it as your first project; there's definitely easier things. But it's easier than Nextcloud, for sure, and it's definitely easier than a lot of other projects, in my opinion.
Yeah, and I do think that is good with Mastodon, because if you disagree with any of these things, like if you don't agree with blocklist synchronization, that's fine. You can use any other Fediverse system, right? There's loads of other ones; you don't have to use Mastodon. I just think it's the most popular, or one of the most popular, I guess.
I think so, yeah.
So that's kind of why it has the most features; it's the most feature-rich, I guess, and this is just kind of adding to that. It is interesting: I did notice the timeline for the end-to-end encryption for private messages is 2027. And I just think, you know, we're going to be on the side of "don't use that," so we don't really want it. But I mean, I don't really think that's that important. People already do this, I already see people on Mastodon linking their Signal account. We would suggest that much more than going and using end-to-end encrypted private messages.
Yeah, I think somebody here did mention that. On YouTube, Seismic said, "Finally, a chat besides Signal that I can use." I mean, we're going to have to wait and see what this looks like in the final version. I highly doubt it's going to be something that we would recommend over Signal, or even alongside Signal. But I always think it's great to have more protection wherever possible, because there may be times that I want to message somebody.
And especially, you know, one of the problems that a lot of these decentralized services have is that there tend to be one or two, or a handful, of servers that get a massive amount of users. And so a lot of people criticize that; they're like, well, it's not really decentralized because everyone's using that one server. But regardless, if you're talking to somebody, like if I message somebody, there's a good chance they're going to be on mastodon.social, right? And so maybe I'm comfortable telling that person my date of birth, but I don't want to tell everybody.
And I don't know who the admin is, and I don't necessarily know if I trust the admin. And, you know, some Mastodon instances even have an admin account where more than one person has access to it. So I think it is really good that they're adding this level of privacy. But I doubt it's going to be implemented in a way where we're like, well, shoot, this is just as good as Signal, everybody just use that. It's still nice to have that extra layer of protection, for sure. But yeah, that is a long ways off.
Definitely.
I think, yeah, you're right.
I think it is important, like you said,
to have some level of protection rather
than nothing.
Right.
Definitely agree.
And I also saw real quick, somebody asked,
why are people chatting numbers?
We were running a poll.
I think it got moved off when I
started showing comments,
but we were running a poll about
if you were a Mastodon user or not.
And so you would comment in the chat,
one for yes and two for no.
But we'll try another poll in the future.
So I think for now,
that's all I've got on that story,
if we want to move on to the
next one, unless you have final thoughts.
Awesome.
Yeah, no,
I think we kind of talked about that
quite thoroughly here.
So let's move on to the next story
here.
And this one has been kind of a
hot story this week.
Cal.com is going closed source.
Here's why.
So I guess first, you know,
I think a lot of people in our
audience may not be familiar with this if
they're not really into, like,
self-hosting or meeting-scheduling sort of
stuff.
So basically cal.com was, well,
it is still a thing, right?
It's basically a way to organize
meeting times with people.
So you could send someone a link and
it would have your availability.
And then the other person could select the
time that works best for them.
Personally,
I've had to do that, because we communicate
across time zones now.
This is like a global economy,
so people have to sort of find the
best time,
and that is oftentimes across different
time zones.
So the thing here with cal.com is they
have decided to move to going closed
source.
So originally they had a self-hosted
version.
And I think the whole thing with that
was that it was a full open source
version of their service that you could
self-host yourself.
And basically they've announced that they
are diverging from that project.
And for some time now,
they've actually been working on a closed
source version.
Um,
and that's the version that runs on
cal.com. And they have introduced a new
service, which is cal.diy, which is a
self-hosted version. I do want to talk
a little bit about that, but first let's
talk about the reasoning behind
going to this closed source model. So they
posted a video here, saying that AI
is killing open source, stating that, you
know, AI vulnerability scanners
are basically making it really hard to
keep up with patching vulnerabilities,
because, you know,
they're able to scan the software and find
vulnerabilities much more easily than a
professional hacker, a proper threat
actor, spending hours and hours.
They can find these vulnerabilities
without being that technical,
is what I'm trying to say.
So basically that's kind of their
reasoning behind this.
Their reasoning for moving to closed
source is security.
And I think that's kind of where we
kind of fundamentally disagree with this
because I think the source model of your
software doesn't actually have an impact
on security, right?
There's still ways to, you know,
analyze software that is closed source.
There's still ways to, you know,
test software, crash software,
find vulnerabilities.
So that's an interesting take, I think.
One thing that Jonah brought up, you know,
we have like a staff group chat,
he brought up that the cal.diy project
looks kind of sus.
If you go to the website cal.diy,
there are actually a lot of warnings all
over the page,
which kind of makes it seem like they
may not really be updating this.
This seems like, you know,
they're kind of making it seem like it's
extremely risky to use.
Um, so this is kind of strange.
I don't really know why they have such
a large warning on, like, every single
page, or at the top of the introduction
page, saying: use at your own risk,
this is the open source community
edition and is intended for users who want
to self-host their own Cal.diy instance,
it's strictly recommended for personal,
non-production use, please review all
installation... blah blah blah.
It's quite strange. And below that
there's, like, an ad for their
commercial service, which is
closed source now.
So, you know,
we've always kind of been
saying that, you know,
the source model, I don't think, has
an impact on the privacy or security.
And yeah,
like Jonah said in the chat here:
"literally fearmongering about open source,
actually."
Like, these are the silly arguments that
we hear from people who don't really
know what they're talking about, who
say, like,
open source is so much worse
because then everyone can see the
code and hack you. It's just...
not really. It just means there's
more scrutiny. And I think, you know,
it doesn't really make that much sense
to do this. We kind of talked
about this a little bit in our group
chat, but this company itself,
cal.com, is venture capital backed, and
basically what that means is there's
people who
invest money in the company to, you know,
gain a stake in the company, I guess.
And they want to be able to earn
a return on that money that they've
invested.
And in a lot of cases, you know,
open source software opens the company up
to having their ideas and direction
possibly copied by a competitor or to
allow insights into their company from a
competitor.
And I think,
This is kind of a little bit silly.
I think, you know,
if you're making a really good product,
which I think cal.com is making a really
good product,
then you shouldn't be concerned about
someone stealing your ideas.
Like,
I'm not really familiar with that
ever being the case.
Um, I think it keeps your company
kind of...
You don't even... like, you can have an
open source license that doesn't allow
people to use it for commercial use.
You can still have the software be open
source.
You can have the source code be
source-available.
So that's why I'm kind of confused by
this move here.
But I did want to hand it over
to Nate, because there was actually a bit
of a clapback here from Discourse,
which is basically the
forum software that we use for our forum.
So I'll just hand it over to Nate
here to tackle that.
Sure.
Um, yeah.
So quick shout-out to our forum,
discuss.privacyguides.net.
We are powered by Discourse, which
seems like a nice piece of software
as far as I can tell.
I haven't had to deal with it behind
the scenes.
Jonah does all our hosting,
but it seems to work pretty great.
And, um,
I mean, I'm not going to mince words.
This was absolutely,
like a clapback was a good way to
put it.
This was a response.
But I want to give a shout-out
to Discourse, because to me,
this felt very direct.
It was not watered down,
but it was also not overly aggressive or
unprofessional.
And I feel like I don't see that
a lot these days,
and I just really appreciate that.
So thank you, Discourse.
This did not pull any punches,
but was also...
I don't know, just very professional,
in my opinion,
as professional as calling somebody out
can be.
But yeah,
so Discourse literally said, "Discourse is
not going closed source,"
which... I think the cal.com one was
"Cal.com is going closed source."
Yeah, that was a direct quote.
And basically,
they kind of said everything that Jordan
said, actually, which is, you know,
they said here that, like,
their reasoning is that AI has made open
source too dangerous for
software-as-a-service companies;
code gets scanned and exploited
by AI at near-zero cost.
Actually real quick before I dive into
that,
the cal.com one did have one statement
where I did want to sympathize
with them a little bit.
So they talked about in recent months,
we've seen a wave of AI security startups
productizing this capability,
which they're talking about scanning the
source code.
Each platform surfaces different
vulnerabilities,
making it difficult to establish a single
reliable source of truth for what is
actually secure.
So the way I compared this,
I don't remember where I said this,
but the way I explained this to somebody,
or I kind of summarized it is like,
if you're at home,
like let's say you just moved to a
brand new country, right?
Like not even a state,
you're in a totally unfamiliar place.
And all of a sudden,
a bunch of like salespeople come knocking
on your door, insurance salespeople.
And this one guy is like, hey,
you need flood insurance.
And the next guy is like, no, no,
no, no, no.
There's a lot of wildfires.
You need wildfire insurance.
And the next guy is like, no,
you need tornado insurance.
And the next guy is like, no,
you need earthquake insurance.
And you're like,
I don't know what insurance I need.
And so Cal.com was basically like,
I'm just not going to get any insurance.
I'm just going to stop answering the door
is basically what they did.
Yeah.
So I want to give them a little
bit of credit because I understand how
that could be frustrating when you've got
so many different companies and they're
all giving you conflicting results.
And it's like, well,
now I've only got so many people.
I've only got so many hours in the
day.
We can only fix so many things.
However, you know, Discourse here,
they said:
I understand where they're coming from.
The industry is changing fast.
New AI capabilities are being
released every few weeks.
It's a scary world.
And I completely agree that open source
companies need to adapt.
I do not agree with the decision that
closing source is the solution.
And, um, you know, they basically had
two main points. One of them was exactly
what Jordan said: going closed source
is, for anybody who's new here,
what we like to call security through
obscurity. And that basically means
it's the code equivalent of hiding under
the bed, right? Like, if I hide under
the sheets, the monsters can't see me.
And that's basically what it is. And they
point out here in this blog post,
they say that, um,
Closed source has always been a weaker
defense than people want to admit.
A web application is not something you
ship and keep hidden.
Large parts of it are delivered straight
into the user's browser on every request:
things like JavaScript, API contracts,
client-side flows, validation logic,
and feature behavior.
Attackers can inspect all of that.
And then there was another spot.
Did I already pass it?
Oh,
those same AI systems don't actually need
your source code to find vulnerabilities.
They work against compiled binaries and
black box APIs.
I will admit that I do not know
a lot about technical stuff and code,
but I do know that I see a lot
of people reverse-engineering
apps, right?
Proprietary apps.
And they decompile it and they find ways
to get in there and go, oh,
look at what this app is doing.
Look at all the calls home it's making.
Look at the fact that the traffic is
not encrypted.
What's this server it's contacting?
So clearly,
like the blog post says,
it doesn't need to be open source.
People can find ways into this stuff, and
they've been doing it for years.
So that doesn't actually stop
anything.
It's just security through obscurity.
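As a toy illustration of that "attackers can inspect all of that" point, even minified JavaScript shipped to every visitor's browser leaks the API surface through its string literals. Everything in this sketch is invented for illustration; the snippet and endpoints are not from cal.com or Discourse:

```python
import re

# Toy snippet of "minified" JavaScript, as it would be delivered to any
# visitor's browser. The function names and endpoints are made up.
minified_js = (
    "async function a(t){return fetch('/api/v1/bookings?apiKey='+t)}"
    "function b(u){return fetch('/api/v1/users/'+u,{method:'DELETE'})}"
)

# Even without the original source code, the string literals expose
# the API surface: grab every quoted path that starts with /api/.
endpoints = re.findall(r"['\"](/api/[^'\"?]+)", minified_js)
print(endpoints)  # ['/api/v1/bookings', '/api/v1/users/']
```

The same idea scales up: scanners and attackers alike pull endpoints, parameters, and client-side validation out of exactly this kind of shipped code, whether or not the repository is public.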
And it's...
I think security through obscurity can be
part of a larger defense.
I don't know about in this case,
but in general,
I think there are times when it can be.
Like data removal, right?
If you pay for a data removal service,
like EasyOptOuts is one that we
recommend on the website.
That's a good start.
But also like...
using a PO box whenever you're able to,
like not putting your address in every
single form online.
Like, you know,
it's part of a larger defense.
I wouldn't rely on that by itself.
And so the other point they made,
and this is a very, very long post,
they said that, basically,
they think the security argument is a
convenient frame for decisions that are
actually about something else.
So one is, you know,
Jordan mentioned that competitors can read
your architecture and your product
thinking.
And then there's governance.
They said open source communities push
back.
They file issues about decisions they
don't like.
They fork.
It's exhausting to manage.
I mean, fair.
I will be the first to admit that
every once in a while,
I do get burned out on the community
and I need a break.
But I don't know if that's a good
enough reason to close source your code.
So yeah.
And just to go back to the
competitors thing,
Jordan pointed this out too.
There are a lot of companies that are
open source and they're thriving.
Look at Bitwarden, for example.
I mean, granted, they do have investors,
but they're still open source.
You can self-host Bitwarden.
They have instructions on how to self-host
Bitwarden.
There clearly is a way to do both.
And I do wonder if...
cal.com explored any of those options.
It does sound kind of like there was
just a lot of investor pressure, and this
was just the easy button, right? Like, if
we go closed source, that's going to make
it harder for people to self-host, they're
going to have to pay for us, and we'll
slap a bunch of scary warnings on our
DIY page. Which, yeah, that's not cool.
And actually, to make that even worse, if
I can go back to their blog, they
did say that... where did it go...
here.
God dang it.
Okay, yes.
While our production code base has
significantly diverged,
including major rewrites of core systems
like authentication and data handling,
we want to ensure that there is still
a truly open version.
So basically,
the Cal.diy version is completely
different from the cal.com version.
Which raises a lot of questions for me.
And they also make it sound like,
I don't know if they're actually doing
this,
but they kind of almost made it sound
like, here's Cal.DIY.
We'll update it if we feel like it
every once in a blue moon.
But otherwise, like, we don't care.
This is just kind of, shut up the
purists. Which, again,
is a really crappy take toward a community
you claim to have valued, or whatever.
But yeah, so this
is a really long blog from Discourse, but
it is worth a read. And again,
I really applaud that they pulled no
punches, but it also wasn't, you know,
just like a... oh, it's a PR
opportunity. It was, here's our facts,
here's our experience, our reasoning. So I
really give them a lot of credit for
that one. But yeah, that was a...
It's such a wild story, and it's so...
I hate to assume malice in a company,
but yeah, it's so... What turned me off,
I think, was just the fact that, again,
that's all it was,
was we're just going to go closed source,
and that's going to fix all our problems.
And almost immediately,
I saw everybody was just kind of like,
is it though?
Is that really what this is about?
And it just kind of...
I think that's going to do a lot
more damage than if they had just admitted
like, hey,
this business model isn't working for us
and we're going to try something else.
I think they might end up losing a
lot more customers because of the way they
handle this.
I don't know.
Do you have any additional thoughts to the
discourse response or anything?
Yeah,
I just think trying to pass this off
as being for security reasons is,
to people that actually follow and
understand security, just laughable.
And unfortunately, I think those people
are, in a lot of cases,
going to be the people that self-host
this software.
So they're going to be the ones that
realize you're being kind of crappy about
it.
Right.
Um,
I think they should have been a bit
more clear about the reasoning because,
you know,
we don't know if there's another reason
why,
like we were talking about with the VC
investors.
But I think, you know,
especially when we have so many ways
to analyze software that's closed source,
even. So, you know, people can do
fuzzing: they can feed programs
a bunch of random data to get them
to fail. They can do binary analysis,
so you can inspect memory dumps of
applications while they run,
reverse-engineering stuff. So, you know,
I think it feels a bit disingenuous.
That's the word I was looking for.
Thank you.
Um, but yeah,
like we see this a lot,
like even the opposite way around,
like there's,
there's malware that we see and, you know,
we're able to analyze that malware,
stuff like that, um,
to see what it's doing and to understand
what the code might be.
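To make the fuzzing idea mentioned above concrete, here's a minimal sketch. The parser is a made-up toy, not anything from a real product, and real fuzzers like AFL or libFuzzer are far more sophisticated, but the principle is the same: throw random inputs at a program and record anything that fails in an unexpected way.

```python
import random

def parse_record(data: bytes) -> bytes:
    """Toy parser: first byte is a payload length, the rest is the payload."""
    if not data:
        raise ValueError("empty input")        # expected, handled failure
    n = data[0]
    payload = data[1:1 + n]
    if len(payload) != n:
        raise ValueError("truncated payload")  # expected, handled failure
    return payload

def fuzz(iterations: int = 1000, seed: int = 0) -> list:
    """Feed random byte blobs to the parser and collect unexpected crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(16)))
        try:
            parse_record(blob)
        except ValueError:
            pass                               # graceful rejection is fine
        except Exception as exc:               # anything else is a finding
            crashes.append((blob, exc))
    return crashes
```

Note that nothing in this loop needs the target's source code: the same harness works against a compiled binary you invoke as a subprocess, which is exactly the point about closed source not stopping this kind of analysis.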
So anyway,
I don't think them switching to closed
source is going to make it any...
I mean, sure,
maybe a little bit against these
basic AI vulnerability scanners,
but I don't think it's a good enough
reason to make this switch. Because, yeah,
it just feels really not great when
they're trying to make up a reason that
doesn't really exist.
Yeah, and I mean,
something that just popped into my head
is, you know, one of,
there's a lot of reasons you might make
something open source or even source
available, like you mentioned.
But one of the reasons I think is
that
it increases the chance that somebody
could find a vulnerability, right?
I want to make it clear real quick
that open source does not automatically
mean that something is more secure or more
private.
It just means that the opportunity is
there.
And I think there is a certain critical
mass where when we're talking about these
bigger projects like Bitwarden or maybe
Proton or some of these,
because I know Proton,
some parts of them are open source,
some parts aren't.
But you know what I mean?
When we're talking about big projects like
that, then...
I think odds are it probably is more
secure just because they're a big project
and they've got a lot of eyes on
them.
But especially for some of these smaller,
like mid-level projects,
I don't know how true that necessarily is.
It's probably not true,
but the opportunity exists.
And where I'm going with that is I
think, especially in this community, um,
there's such a dislike for AI that
they're almost shooting themselves in the
foot.
If this really were about security,
they're kind of shooting themselves in the
foot, because the bad guys are still going
to use AI.
They don't care.
They're going to use any advantage they
can to get ahead.
They don't play by any rules.
The good guys,
not all of them will be using AI.
Right?
And they're playing by a different set of
rules.
So you almost need to...
Like we've said a million times now,
the bad guys are going to find the
vulnerabilities no matter what,
whether it's open source or not.
By making it closed source,
the only people you're stopping are the
good guys who are not using AI.
So yeah, that's... I don't know.
That just kind of popped into my head.
Yeah, I think...
I'm not sure if I a hundred percent
agree on the privacy and security aspect.
I think it's more like a transparency
thing, which, I mean,
is good for trust and stuff
like that. But, I mean,
I think there could definitely
be closed source software that's just as
private as some open source software.
So, you know, I don't know.
It just seems like a really silly reason
to me, but obviously
we're going to push for transparency.
Like, transparency is important,
rather than kind of a black box
where we have to work things out
ourselves.
So yeah.
Agreed.
Um,
Alrighty.
Well, before we dive in,
we have a story coming up about...
well,
some updates to age verification, or
identity verification,
let's put it that way.
But first,
we're going to pause and talk about some
updates with what we've been working on at
Privacy Guides this week.
So in the video department,
we're really excited.
Bit of a soft announcement here.
On Sunday,
we're going to release an interview with
Carissa Véliz.
And if you guys don't know who that
is, you definitely should look her up.
You're missing out.
She wrote this awesome book called Privacy
is Power.
I'll grab it in a minute,
but I actually have it on my bookshelf
back there.
And it's honestly,
like I could gush about this book because
it is so accessible.
You know, it's so like,
I don't want to take drive-bys at other
authors,
but some other authors have written some
very seminal works in the space that were
very academic and kind of hard to read
and pretty dense.
And Carissa Véliz is, I mean,
she's a professor of ethics at Oxford
University.
So she is very academic as a person,
but her writing is so plain-English
and down to earth.
Like, I could give this book to anybody,
and maybe they wouldn't read it because
it's not their cup of tea,
but they absolutely could read it, because
it is written in such plain English
but is still full of useful
information.
So yeah, I, as you can tell,
I'm a huge fan, but, uh,
we got to interview her and we talked
about her focuses on AI and ethics and,
you know,
what is AI going to do for the
future of, of our society?
Uh,
we did talk about privacy a little bit.
Um, I mean, it was a,
it was a great conversation.
I, uh, again, not to like
fanboy too much,
but I was telling people like,
I felt like I was smarter just for
having been in the same figurative room as
her.
Um,
unfortunately this was a remote interview,
not an in-person one, but, um, yeah,
so that's going to be out on Sunday.
She's absolutely awesome.
Go read Privacy Is Power
if you haven't.
I've already pre-ordered her new book, and
you will get a taste of that on
Sunday.
So definitely subscribe on YouTube or
PeerTube.
And we'll be posting that when we come
out or when it comes out,
I can't talk tonight.
Yeah, no,
I'm really excited for the interview to
get released.
I've been working on like the editing side
of things.
Oh, there it is.
There's the book.
It's kind of a very recognizable cover as
well.
But yeah,
I definitely am a fan as well.
I think, yeah.
And Nate asked some
really good questions in the
interview about a lot of things that she
hasn't talked about as much publicly,
I would say.
And a lot of stuff that was in
the book itself.
So it's like a teaser,
like she's going to talk about some of
the stuff in the book and, you know,
I think it's interesting, yeah.
So she's got a new book coming out
called Prophecy,
which is about AI prediction stuff.
So, yeah, that's also pretty interesting.
So that could be interesting to check out
as well.
I think it's on pre-order until April 19th
or 21st... the 21st, April 21st.
So it is looking quite interesting for
that.
But this week we also had some privacy
guides news posts.
So we had,
looks like we had a couple from Freya
and also a couple from Nate as well.
So Nate did one on HackerOne pausing its
internet bug bounty.
So they also were kind of saying that
they were having an issue with AI bug
reports, which is another problem,
I think.
There was a data breach roundup from Nate
as well.
which I think is important to keep on
top of, just scan the list.
Just check it out and scan the list
because you never know what you might be
caught up in.
And I think companies are getting to a
point where they're
being a bit more accountable,
you know,
sending out notices to people,
but it's also good to keep on top
of that yourself.
And there were also some articles here
from Freya. Like Nate talked about
earlier, there's Mastodon getting
end-to-end encrypted private messages,
so Freya had an article about that.
Fiverr exposing information of its users
publicly on Google search results.
Oh my goodness.
It's horrible.
India dropping proposals to require
biometric ID app after strong opposition.
So yeah,
there's a lot of interesting things going
on in India regarding that.
And there was also some stuff about Google
Chrome adding protection against cookie
stealing malware.
But yeah, kind of interesting,
interesting week.
So definitely check out the
privacyguides.org/news section.
I guess with that being said,
all this is made possible by our
supporters.
And you can sign up for a membership
or donate to privacyguides.org.
or you can pick up some swag at
shop.privacyguides.org.
Privacy Guides is a nonprofit which
researches and shares privacy-related
information and facilitates a community on
our forum and Matrix where people can ask
questions and get advice about staying
private online and preserving their
digital rights.
And yep, if you want to do that,
you can visit privacyguides.org and press
the red heart icon in the top right-hand
corner.
of the website.
You'll also be able to sign up for
a membership and get sweet perks as well.
But now let's talk about the future of
warrantless surveillance in the U.S., Nate.
Yeah.
All right.
As the American,
I guess I get to talk about this
fun little topic,
and that is Section 702, which
many of you may not be super familiar
with.
For the record,
I follow many different news sources.
The other news source that covered this
story in my feed was TechCrunch.
I just want to throw it
out there:
I know this headline obviously has a
certain political leaning to it,
but it had a lot more detail in
it as well.
So that's why I went with this one.
Definitely a lot more detailed than
TechCrunch's, like, five paragraphs.
But anyways,
so for those of you who don't know,
here in the US,
we have the infamous NSA,
the National Security Agency.
I think for some reason
my brain just blanked.
I know they used to jokingly call it
the "No Such Agency," because up until the
nineties
they didn't even acknowledge it existed.
But it does exist.
And they have so many different things.
One of them is called the Foreign
Intelligence Surveillance Act,
which basically authorizes them to spy on
communications that go in and out of the
country.
And they play really fast and loose with
that, specifically Section 702.
If I remember correctly,
John Oliver did a piece way back in
2013 where he talked about this,
and he went to Russia and interviewed
Edward Snowden.
Super funny.
I highly recommend it still holds up.
Um,
But the way he described it, or the way
he read Section 702, is it
allows for the collection of, quote,
"any tangible thing," unquote,
related to, like, national security and
communications,
which he points out is so incredibly
broad.
It's like telling your teenager you can
only use the car for car-related
activities.
So it's like, okay, hit-and-runs,
drinking and driving,
street racing...
these are all car-related activities,
my dude.
So yeah.
Pretty broad stuff, and the government has
acted accordingly.
And so Section 702 has been
very controversial on both sides of the
aisle.
There have been politicians from both
political parties who have said, like, hey,
we need to rein this in, at least
publicly have said we need to rein this
in, because for well over a decade now
we have failed to do that.
But that might be changing.
Might be.
Section 702 is one of those
things that has to be renewed
periodically.
And around midnight,
I don't know why he did that,
but for whatever reason,
the Speaker of the House,
which is basically the guy running the
House of Representatives,
the head representative,
he convened a vote.
I guess this was last Friday,
so this would have been after we recorded
the podcast last week. He called on
lawmakers to vote on extending Section
702.
And it failed
by, I believe, where did it go?
They said about a dozen votes.
And for those of you who are not
keeping up with the US right now,
first of all, I very much envy you.
But our government is incredibly divided,
potentially the most divided it's ever
been.
I don't know if that's actually true,
but everything is very partisan right now.
That is not me being snarky.
That is just true.
Everything is very partisan right now.
And on top of it, the...
what's the word I'm looking for?
The margin of control, like the ratio...
because we have a two-party system in the
US, which is probably our first mistake.
Our ratio of one party to the
other is razor thin.
So everything is very contentious.
Right now, the Republicans,
which is our conservative party,
they have a slight majority,
but it would not take a lot of
votes to flip things.
And that matters because about a dozen
Republicans voted against renewing this
thing.
And that was enough to not pass it.
And they tried again anyways.
They were like, hey,
let's do another vote.
Like the same night, they were like,
let's do another vote.
And then the number went up to like
twenty.
And I think that's when the Speaker of
the House was like, oh,
we should probably stop because I'm losing
support.
So they stopped.
They did manage to pass...
sorry, I did a Ctrl+F here.
They did manage to pass a ten-day
extension.
So
Previously,
it would have run out on Tuesday.
Now it's going to go basically until the
end of the month.
But even then,
it's still... the US is so weird.
It says later on here that... yeah,
right here:
"The Foreign Intelligence Surveillance
Court quietly recertified the program in a
classified ruling on March..."
I don't know how that works.
Jonah commented on Mastodon.
He doesn't know how that works either,
and we're both natural-born American
citizens as far as I know.
We're a very confusing country.
But I think this is exciting news, because
it already failed to pass twice,
and I have to assume that if it
just full-on does not pass,
like if they cannot get this thing passed
through by the end of the month,
then it's got a deadline.
And I don't know what's going to happen
when March 2027 rolls around,
since apparently the FISA court can just
decide to keep doing it.
But I don't know.
I think, to me,
I'm hopeful, because this represents the
first time in, like,
over twenty years, I think, that we might
actually have a shot of getting this thing
defeated. But that's where we're at
right now. Those are kind of the facts:
it failed to pass twice, it's
got an extension until the end of the
month... And I mean,
no matter where you are on the spectrum,
I know I'm probably mostly talking to
people who are like, good, this thing
should die, but I also recognize there's
some people who are like, well, you know,
there does need to be some stuff for
national security, right?
But this thing has been repeatedly abused
for warrantless surveillance.
Like again,
the government is not supposed to collect
data on American citizens and it finds all
kinds of loopholes to do it anyways.
This is actually the thing that like,
they use this to buy location data from
third parties.
And I think that's one of the...
it's funny, is, like, the Democrats didn't
even want to completely kill this thing.
They just wanted to reform it.
They're like, require a warrant,
stop buying data.
And the Republicans were like, no.
So I'm glad to see at least some
Republicans agreed with this.
So I think the one thing I wanted
to add is... where did it go?
There was... basically, the
Republicans did try to introduce some
quote-unquote reforms,
which were already existing things.
Where did it go here?
Yeah.
So the amendment contained a provision
that was in essence a fake warrant
requirement.
It would have prohibited government
officers from intentionally targeting
Americans' communication without a
warrant, which is already in the statute.
It also offered the government a warrant
path if agents had probable cause to
suspect the subject is an agent of a
foreign power,
an authority that already exists.
So basically they just wanted to reiterate
things that were already in there without
actually doing anything meaningful to rein
it in.
And just to drive home the point,
ready to go.
The FBI has used Section seven oh two
to run warrantless queries on a U.S.
senator,
nineteen thousand donors to a
congressional campaign,
Black Lives Matter protesters and both
sides of the January sixth Capitol attack.
So.
I don't know what to tell you.
To me,
this is pretty obviously unconstitutional
and, at the very least,
needs to be reined in,
regardless of where your political
leaning is.
But it might just die altogether.
And I don't know.
I guess we'll see what happens if it
doesn't pass and March of next year rolls
around.
But yeah,
I think that's all of that story.
Did I miss anything, Jordan?
I guess I just have questions that maybe
people in the audience might also have.
Yeah, go for it.
I'm not a lawyer,
but I'll do my best.
Yeah.
So like,
I guess my question would be like,
I thought that you did need a warrant
to surveil people.
Is this like a specific special case that
they have to invoke specifically, or...?
Yeah.
So Section seven oh two authorizes
warrantless surveillance of non-Americans.
And I know we've talked about this briefly
in the past in relation to other stories.
The loophole is that, like...
when I text you, for example, I mean,
we use Signal,
so they can't see it anyways.
But when I text you,
since you're Australian,
our communication crosses international
borders, and that's the justification the
NSA uses to scoop up that communication
and say, we get to collect this.
And in theory,
they're probably supposed to throw away
like my side of the conversation or
something, but, you know,
it doesn't stop them from basically spying
on me without a warrant just
because I'm talking to you,
even though they might not have a reason
to suspect anything.
They just,
it crosses international borders.
So yeah.
Yeah.
So wait, okay.
So I,
but they wouldn't be doing that to
everybody automatically, right?
Like it'd have to be like,
if I was on a watch list,
maybe they would consider doing that,
right?
Like, no.
As far as I know,
it's a like carte blanche across the
board.
They do not need a warrant to spy
on any non-American citizen.
Wow, okay.
Yeah, which is kind of,
it is very horrifying.
And it's also kind of crazy to me
that, you know,
when I think about the US landscape...
like, conservatives are so,
and I don't even mean this as a
ding,
conservatives are so pro-American,
pro-Americans'-rights:
I'm a US citizen,
I get all these wonderful freedoms and
rights.
Then why can't we agree on the basics
of, like,
stop spying on your own citizens without a
warrant?
But for some reason,
apparently we can't even get that far.
So I don't know.
I mean,
I feel like spying on people that aren't
American citizens is also kind of
problematic too.
I mean, I agree,
but I'm trying to think of like the
bare minimum base floor that we could all
get to agree on.
But apparently, you know,
I guess the bar is in hell.
It's so low.
So I don't know.
I'm very cynical about this stuff.
So I guess another question that I have
is, like,
it seems like in this case it was,
like,
a lot of Republicans who were voting
against this to block this.
Is that normal?
Is this, like,
sort of somewhat of a bipartisan thing,
like,
wanting the NSA to surveil everyone or...?
Yeah, so our – again,
for foreign listeners,
I know the US doesn't truly have like
a left-wing party,
but our Republicans are our conservative
party and the Democrats are our more
liberal party.
I'll put it that way.
And everybody – again,
I hate to say it,
but it is true.
Here in America,
things are so partisan that people
typically vote along party lines.
And so the Republicans,
because they are more conservative,
tend to be a lot more...
you know...
I feel bad saying this,
but this is their logic,
and I swear to God,
I'm not trying to ding anybody.
They're very pro-troops.
They're very pro-police.
They're very pro,
like, "our intelligence community is
protecting us,
and so we need to give them all
the tools they can to protect us."
And I've literally seen,
and I am still mad about this to this
day...
There was a moment,
I don't remember how it got there,
but somebody actually got a bill all the
way up to our Congress that basically
said:
require police to get a warrant instead of
buying data.
It was literally that.
And one of the Republicans who voted
against it literally said, he's like,
well, our enemies like China, for example,
they can start up a shell company,
or they don't even need a shell company,
they can buy this data from any data
broker, right?
Just like we can.
So if we require our people to get
a warrant,
that puts us on unequal footing.
And I have never wanted to scream at
my screen so hard because I remember
thinking, I'm like,
then the solution here is to pass an
actual data privacy law so that nobody can
buy the freaking data.
But apparently that's just, I don't know,
that requires too many IQ points, I guess.
But anyways, personal opinion aside, like,
yeah, that's, it's...
Republicans generally tend to be a lot
more lenient on military and intelligence
and law enforcement and argue that we need
to give them as much help as they
can to do their jobs,
which includes putting as few restrictions
on them as possible.
So, yeah.
I see.
Okay.
Yeah,
it did mention in here the
House Freedom Caucus Republicans.
So, I don't know,
it sounds like they might be, like,
libertarian-type people.
I'm not really sure.
Yeah,
I'm not super familiar with them either.
I saw that too.
It looks like I'd have to look more
into them.
Well,
it's good that they voted against it
anyway.
I think, you know,
if we can put aside all the other
partisan stuff and be like, you know,
privacy is an issue that's important.
Let's not surveil everybody and collect
all their information unnecessarily.
I think we should try and push back
against that. Which is, uh, unfortunate.
I'm sorry it's so partisan,
because I think that definitely does
make things more difficult, you know.
If there's one party that's trying to get
something passed, it's like, we don't want
to do that because it's by those people.
That's not really the point.
It should be based on the merit
of what they're trying to pass, not,
you know, the party.
It's extremely frustrating because that's
exactly what's happening: somebody
will put up something like this, you know,
like, hey,
spying on people without a warrant is bad.
Well, I don't like you,
so I don't like your bill.
And it's like, dude, come on.
But I do want to point out on
that note,
there are some pretty big names in here
that I think are really telling.
Yeah.
Not to get too deep into politics,
but like Thomas Massie of Kentucky,
I'm going to assume he's a Republican
because Kentucky is a very deeply red
state.
Chip Roy of Texas is a Republican.
Lauren Boebert,
who used to be like one of Trump's
biggest supporters.
I don't know if she still is.
He's kind of losing some of his key
supporters.
But I just point that out as, like,
man,
these are big people that I would not
normally expect to, like,
vote against the party line.
So that's probably more indicative of
larger US politics,
but it's good to see that.
Like you said,
like there are some people who are just
like, no, this,
this is not a partisan issue.
We need to fix this.
So hopefully it won't pass and then we'll
see what happens.
Yeah, I think it is kind of frustrating.
But you know, I hope it doesn't.
I guess we're looking at that on Tuesday.
Oh, no, sorry, not Tuesday.
Sorry, at the end of the month.
So hopefully we get an update for that
in a next This Week in Privacy episode.
But yeah, stay tuned for updates.
Definitely make sure to subscribe and
add this to your podcast app.
But I mean, yeah,
I think it's important, when we talk about
this sort of stuff, you know,
that we're just talking about this from
the privacy angle.
So, you know, we're not trying to get
political.
Because I know I personally don't talk
about US politics because I just feel like
I'm going to offend someone.
I'm going to always offend someone if I
say something.
So thanks for kind of explaining that
because...
I definitely have less experience.
I mean,
I know a bit about US politics because
it's kind of unavoidable.
So yeah,
but I think it's good to explain things.
But I guess moving on to this next
story here,
unless you have anything more to add.
Nope, that's all I got.
All right.
So this next story here is about the
EU age checking app.
So basically...
Yeah, we talk about this a lot,
you know, age verification stuff.
And now there's basically a movement in
the EU to keep kids safe online with
this new EU age checking app.
Quoting from the article here from
Politico,
the European Union's age verification app
is ready to be rolled out to protect
kids online.
The bloc's chief Ursula von der Leyen said
Wednesday, sorry if I messed up your name,
our European age verification app is
technically ready and will soon be
available for citizens to use,
the European Commission president said at
a press conference.
And basically, according to this article,
the app is a critical part of the
EU's plans
to keep children safe online.
The technology would allow people to prove
their age through government-approved
verification systems.
The EU said it has ensured it would
also protect citizens' privacy rights and
personal data.
Now,
I think that last sentence right there,
that remains to be seen because basically
every single age verification system we've
seen so far has been not great from
a privacy perspective.
And quoting the article again,
we're holding online platforms accountable
that do not protect enough of our kids,
maybe.
Might have been a misquote there.
The new age verification solution and the
enforcement of our rules go hand in hand.
So basically, this app is ready to
be downloaded.
And just kind of highlighting a post here:
someone on our forum posted a link to
their blog, basically going through the
new EU age verification app.
So if you haven't heard of them before,
The Privacy Dad does sort of
parenting-related privacy stuff,
and they've been posting an update here
about the EU age verification app,
so you can kind of see what the
flow will look like.
And apparently, according to them,
they were able to download the APK and,
you know, test out the app.
A lot of the features aren't a hundred
percent ready, but it has a testing mode,
with which you can basically see how it
would work.
Um,
It does seem like you need to scan
your ID into this app.
So I mean, that's fine, I guess,
if you're sending it directly to the EU
government and there's no third party
company involved here.
But I guess that would be like a
separate governmental body that's been
established for this.
I'm not entirely sure about the whole
process behind this.
But you can kind of see the age
verification credential stuff.
So basically, how it's meant to work is
you visit a website or an app,
and you can use this app to basically
prove that you're over eighteen.
It doesn't share your age;
it just shares the proof, basically.
Um,
But it does look like you need to
take a photo of your identity document and
record a video of yourself.
So that's not great from a biometric
standpoint.
So yeah, that kind of sucks.
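The "shares the proof, not the age" idea Jordan describes is basically selective disclosure: a trusted issuer checks your ID once and signs a bare over-eighteen claim, and websites only ever see that claim plus the issuer's signature, never your birthdate. We don't know the EU app's internals, so this is purely a toy sketch of the concept; the function names are made up, and a real system would use asymmetric signatures (or zero-knowledge proofs) rather than a shared HMAC key.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret"  # held by the trusted issuer (hypothetical)

def issue_age_credential(birth_year: int, current_year: int):
    # The issuer sees the real birthdate once, then signs only the claim.
    if current_year - birth_year < 18:
        return None
    claim = {"over_18": True}  # no birthdate, no exact age
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify(credential) -> bool:
    # The website checks the issuer's signature over the boolean claim;
    # it learns "over 18: yes", nothing else.
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["tag"])

cred = issue_age_credential(birth_year=1990, current_year=2025)
assert cred is not None and verify(cred)          # adult: proof checks out
assert issue_age_credential(2015, 2025) is None   # minor: nothing is issued
```

Note the HMAC shortcut means the verifier would share the issuer's key, which defeats the purpose in practice; it just keeps the sketch self-contained.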
There's a lot of different things here.
So basically it's just a move to basically
change the...
Oh,
there's someone in the chat who asked a
question here.
Probably outside the stream subjects,
but I noticed that hosts are using Apple
products.
Is there a privacy related reason or just
personal preference of hardware?
Personally...
I'm not going to talk about my personal
setup, but I'm just going to say,
for work, you know,
I need to use DaVinci Resolve.
I need to use applications that aren't
available on Linux.
I would love to use Linux;
I think it'd be great if I could
use Linux.
But as far as I'm aware,
there's still a lot to go on DaVinci
Resolve.
It's quite annoying to use on Linux.
It's missing some stuff.
It is less stable.
It's less supported.
I use Affinity for all the graphic design
stuff we do here at Privacy Guides.
And
As far as I'm aware,
that is also quite finicky.
Generally,
I want to be focusing less on the
technical issues,
like having a bug happen in DaVinci
Resolve where I can't render a video
or something like that,
if possible.
So, you know, I think
you kind of have to use what you
have to.
I mean,
I don't have any personal information on
this computer.
It's like a work computer.
So I'm not really that bothered by using
an Apple product to do this.
Um,
So I think you just have to
compartmentalize things.
But sorry, I kind of got off track.
I just wanted to quickly answer that
question because I guess Nate has got a
MacBook and I'm using an Apple avatar.
So I guess that was kind of a
question that needed to be answered.
But yeah,
do you have any thoughts on that or
on the EU age verification stuff?
Um, well,
I guess let me start with the question.
Um,
so I am fortunate enough to have one
of each computer and this is also a
work computer.
This was given to me by Privacy Guides
because my Windows computer is from,
I think...
So in tech years,
it's starting to get up there.
I've had a couple of close calls with
it already.
And so this was kind of like, Hey,
We should get me a MacBook just in
case the day comes when my Windows
computer doesn't boot and I'm not
completely up a creek.
So this is kind of my backup,
my computer.
But then also like my Windows computer is
like I've got all the cables dressed in
and it's really nice.
So it's like, cool,
the Windows computer can stay there.
And then I'll use this one when I
travel or for the podcast or something.
Um,
cause I probably should do more work at
a standing desk, but I don't cause my,
my actual desk has like three screens and
well,
two plus the laptop and I have studio
monitors and stuff.
So yeah.
I try to use Linux more for,
basically,
anything that doesn't involve editing or
gaming.
Honestly,
I prefer Windows just out of habit,
just because I'm so used to it.
And also, again,
I do some gaming, and Windows generally
handles gaming better than Mac.
I've heard gaming's come a really long way
on Linux.
I know Nick from The Linux Experiment
edits on DaVinci.
But I also...
I was a professional audio guy for years
before I took this job.
So I have amassed a collection of plugins
and a workflow that are very specific to
Windows.
So even if I moved over to Linux
for DaVinci,
there's a really good chance that a lot
of the plugins I rely on would not
move with me.
Yeah, I don't know.
But I...
Yeah.
I mean, that's kind of my workflow.
I use Windows for production and gaming.
I use the plugins,
which is why I'm still on Windows.
And also I use Qubes,
which is never going to do production or
gaming to begin with.
Not unless somebody wants to donate, like,
a thousand-dollar computer that's just
super souped up, and I can make GPU
passthrough work reliably,
which I've heard doesn't always. So yeah.
And also, I hate to say it,
but when I got this computer,
I used it as my main computer for,
like, a week or two,
and I edited three or four videos,
just to make sure this will do
what we need it to,
that this is an acceptable backup.
And I, it's weird.
Cause in college I had a Mac and
it was fine.
You know,
like I remember when I switched back to
windows, I was like,
which I did mostly because it was cheaper.
Right.
Like when my Mac died, I was like,
yeah, I'll just go back to windows.
And I remember thinking like,
I don't understand why people are so mad.
Like you can switch between them.
They're fine.
They're easy.
But for some reason,
when I was using this recently,
I was just like,
these keys are driving me crazy and I
hate it.
And like, even now,
occasionally I put things in the wrong
place, or...
apparently there's this thing where if you
tap too hard, it does something different.
I don't know if I'm making sense,
but I tap things really hard,
and it does not work well.
So yeah, it's just the workflow.
I could get used to it if I
had to, but I was just kind of like,
you know what, it works,
I'm going back to Windows, to be honest.
So I don't know.
Macs are definitely much more private and
secure, I would argue,
and certainly a lot less annoying with
the AI.
This thing did not come with Apple
Intelligence enabled,
and, you know, Apple Intelligence is also
probably more useful than Copilot,
I would imagine.
I haven't used either, but I would
imagine so. I don't know.
Well, either way,
it's not my daily driver,
and I don't even mind using Linux.
It's a work computer mostly,
and it just happens to fit my workflow.
So anyways, yeah,
going back to the EU story.
Um, yeah, I mean,
I think we just wanted to share this
because it's a bit of an update to
all this age verification stuff.
The way I understand it is that, um,
this is an app that can be used
as is,
but it's also designed to function as a
framework for other companies to build on
top of.
I could be wrong,
but this is how I understand it is
basically it's like, it's almost like, um,
a lot of you guys might remember during
COVID, um,
Apple and Google released like a built-in
contact tracing thing.
And that way other states could build on
top of that.
And it was kind of like, look,
here's a relatively private and secure,
certainly more so than whatever crap your
underpaid IT guys are going to cook up
in the ten minutes you give them.
like, it's kind of like,
here's a framework to start with.
So you can at least start off on
a good foot and build from there.
And I feel like that's kind of what
this is,
is the same thing is where it's like,
you could use this as is,
but you could also like roll your own
local version.
Um, if I understand it correctly,
I could be wrong,
but I feel like I saw some people
saying that.
Um,
the last thing I do want to note
real quick,
I want to pull this up, is:
For the record,
I don't know who this person is.
I don't know their credentials,
and I haven't seen a whole lot of
people verifying this,
but I also haven't seen a whole lot
of people contesting this.
But this claims to be a security
consultant who said that they found
potential vulnerabilities in the EU's age
verification app in under two minutes.
So one of them is that, I guess,
you can delete the PIN.
Like, there's a way... yeah:
the attacker can simply remove the PIN
values from the file and restart the app.
After choosing a different PIN,
the app presents the credentials created
under the old profile and lets the
attacker present them as valid.
And I think they said there were some
others.
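On the PIN bug: the reported behavior sounds like the app trusts whatever PIN data is on disk and simply re-initializes when it's missing. The usual defense is to store only a salted hash of the PIN and fail closed when the stored values are gone, ideally also binding the credentials themselves to a key in a hardware keystore. We haven't verified the app's internals, so this is a generic sketch of the fail-closed check (the function names are ours, not the app's):

```python
import hashlib
import os
import secrets

def set_pin(store: dict, pin: str) -> None:
    # Derive a salted hash; never store the PIN itself.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    store["salt"], store["digest"] = salt, digest

def check_pin(store: dict, pin: str) -> bool:
    # Fail closed: if the stored values were deleted or tampered with,
    # refuse access instead of letting an attacker enroll a fresh PIN
    # that unlocks the old profile's credentials.
    if "salt" not in store or "digest" not in store:
        return False
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), store["salt"], 100_000)
    return secrets.compare_digest(digest, store["digest"])

store = {}
set_pin(store, "1234")
assert check_pin(store, "1234")      # correct PIN unlocks
assert not check_pin(store, "0000")  # wrong PIN rejected
store.clear()                        # attacker wipes the PIN file
assert not check_pin(store, "9999")  # fail closed: no bypass
```

In a real app the PIN check alone isn't enough; the credentials should be encrypted under a key that is destroyed along with the PIN record, so wiping the file destroys access rather than resetting it.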
But I guess all that to say is,
like, if you don't have to use it,
I mean, obviously,
we don't think you should use this kind
of stuff in the first place, right?
Like,
we are very anti-age verification people.
Jordan made an excellent point earlier
when they said that, like,
this may have been before we were live,
but I think we were live.
But Jordan pointed out that, like,
you know, parental controls exist.
They're already there.
They're already fine.
So, but...
Yeah,
if you're – I mean we're not telling
you to break the law,
but if you're in an area where this
is not required yet,
definitely I would not advocate for
downloading it because it seems like there
might potentially be vulnerabilities.
So I would wait for more people to
do some research and kind of look into
this and –
Hopefully they'll fix these
vulnerabilities.
Cause I mean, it's,
it's like the very least they can do,
right?
If they're going to be like,
everybody has to give us our ID.
Like the very least they could do is
actually secure it in a way where you
can't just like delete the pin and restart
the app.
Like that's completely insane if true.
So I don't know.
I think, actually... yeah,
we're going to talk a little bit more
about age verification here in the US.
So I want to point out,
this is a brand-new,
hot-off-the-presses story.
And as a result,
it's probably sitting in my RSS feed as
I say this,
but I have not seen any of our
usual,
more reputable outlets cover the story.
So unfortunately I had to go with a
press release from a Congressman from New
Jersey, Josh Gottheimer. Gottheimer?
I don't know. But yeah,
no offense to him,
but I'm just saying,
this is a press release.
It's going to be a little bit,
what's the word I'm looking for, polished
and overly optimistic, and maybe not the
most balanced piece out there.
So take this with a grain of salt,
but apparently the US has introduced a
bipartisan Parents Decide Act to protect
kids online.
And, uh, this is basically the, uh,
the operating system level age
verification, which again,
I keep calling it age verification.
I should be calling it identity
verification because it will require
everyone to do it.
Not just kids.
Um,
But yeah,
it will require operating system
developers such as Apple and Google to
verify users' ages when setting up a new
device rather than relying on
self-reported ages,
allows parents to set appropriate content
controls from the start,
ensure that age and parental settings
securely flow to apps and AI platforms,
and prevent children from accessing
harmful or explicit content by creating
consistent,
trusted standards across platforms.
I...
I feel especially cynical about age
verification in the US.
I will admit I got this from another
video.
This is not an original thought,
but it's a good thought.
First of all,
we don't even have a national privacy law.
Nothing.
Nothing at all.
So do whatever you want with this data.
I think just last week we covered a
story.
It was either last week or the week
before.
We covered how the governor of
Wisconsin, I think it was, vetoed a state-
level identity verification law,
because he's like, this thing doesn't
have any protections against selling the
data or securing the data.
There's none of that.
And that's true at a national level too.
So, first of all, there's that.
I will forever remain cynical that schools
have data breaches left and right,
and nobody seems to care,
but somehow encryption and you know,
all this is what's putting the kids at
risk.
Not the fact that the LAPD...
no, not the LAPD,
but the LA school district just leaked the
date of birth,
email address, and home address of every
child in the city.
No, it's, this is the problem here.
I'm being very sarcastic in case you can't
tell.
And, um,
There's also the lawsuit just the other
week where Meta and Google got legally
found to have addictive algorithms.
And I think-
again, not an original thought,
but I like this thought that I've been
attached to lately is the idea of like,
it's so ridiculous that we're saying that
this is only bad for kids.
But once, once you're an adult, it's fine.
Like you,
you can go ahead and let these companies
abuse you and just mistreat you and use
your data,
but you have to be a certain age.
It's just, I don't know.
It's,
I think we're regulating the wrong thing.
And I think, um,
Jordan and I had this discussion recently
too,
where in a lot of other countries and
maybe here in the US,
sometimes here in the US,
when you set up a new device,
it prompts you like,
is this for a child?
And if you click yes,
it will tell you about all of the
potential parental controls that exist.
And I really think that's a much better
way to go.
Like, I like parts of this, right?
Like,
allow parents to set age-appropriate
content controls.
I don't know who's not allowing parents to
do that, but let's pretend.
Ensure the age and parental control
settings securely flow to the apps and AI
platforms.
You know, like,
I think those are good things, of course.
But I don't understand why we can't start
there.
Like,
why don't we start by empowering the
parents to know that these controls exist?
Because, again, you know, like...
I'm so tired of talking about age
verification, but you know,
this whole like prevent children from
accessing harmful or explicit content.
Okay.
What about classical artwork?
Right?
Like, that's a common example:
in, like, ninety percent of these
classical-era da Vincis and whatever,
both men and women are
partially or fully naked.
So does that count as explicit content,
or is that, like, valid because it's
artwork? You know?
Obviously this is not a one-to-one,
but I just watched a sci-fi movie the
other week called Aniara that is, like,
soul-crushingly depressing,
but it had artistic value.
Like, yeah, it was a really sad,
depressing movie,
but it had an artistic merit to it.
It wasn't just like,
I'm going to go watch something depressing
for the sake of it.
So it's, it's very like
I use that as an example.
It's just very like, I don't know.
I don't know what I'm trying to say.
It's getting late.
But it's so depressing that we have no
protection for data in the first place.
And now we want to pass this national
law that says turn over your ID when
states can't even agree what counts as
harmful content.
And, um, yeah, you know,
Swiss Kill said here,
like, whose responsibility is it to raise
their child?
To be honest,
it's not even about responsibility for me.
It's like... right.
Like, to me,
it feels like it's taking away the agency
from the parents to say, okay,
the government's going to tell you what
your kids can look at now.
Like, that's really how we need to start
wording this, because
I guarantee you parents on both sides of
the aisle are not going to be cool
with that.
And it's just, it's, uh, yeah,
it's so frustrating to me.
I don't like
I don't – nobody is saying the internet
is perfect,
but I think most of us can agree
that this is not the way to solve
it.
So I don't know.
I feel like I'm just going to keep
going in circles if I keep talking,
but yeah.
Yeah.
Yeah,
I think they should rename it to
Corporations Decide Act because a lot of
times, you know,
like a lot of these things that this
Gottheimer,
Josh Gottheimer guy is announcing in this
press release are like, you know,
require operating system developers like
Apple and Google to verify users' ages
when setting up a new device rather than
relying on self-reported ages.
That's fine, I guess.
I mean...
but that also means all that information
is going to be flowing through
Google and Apple.
Is that really what we want?
All of this personal information
flowing through big tech corporations
who, you know...
we know Apple and Google don't have
respect for our
information.
So, um, you know,
I don't think that's a great idea,
but it's also just, you know,
These app stores, like it says in here,
allow parents to set age-appropriate
content controls from the start,
including limiting access to social media
apps and AI platforms.
So a lot of times that's going to
be done through an app store.
And like we saw with the app store,
I believe it's called the App Store
Accountability Act.
Am I correct in that?
Okay.
I think so.
If we're thinking of the same one, yeah.
Right.
Yeah.
And that one was also trying to be
passed in the US, and I think
this is almost a similar thing.
Like it's,
it's kind of pushing this onto the app
store, which we've talked about before,
but like,
and Nate mentioned a little bit there, um,
like, you know,
how do we know what they consider is
mature or like,
how do we know what they're choosing to
take down and not allow people to access
is age-appropriate?
Like, who decides that?
So that's another slippery slope of
things.
I think, you know, there's definitely
easier ways to do this than taking
such aggressive measures.
But I think it kind of does take the
agency away from parents a little bit,
because, like, I think, you know,
it's definitely a thing where
parents have very different ways of
raising their children, right?
Like some people will do something a
certain way and some people will be the
complete opposite of that.
So I think, you know,
forcing people to do things a specific way
and to have access to certain stuff is
interesting.
I think there's different ways of doing
that from a parenting perspective.
Um, so I dunno,
I think a lot of times though,
you know, maybe we shouldn't be giving,
I mean,
this is completely a personal opinion,
but maybe we shouldn't be giving children,
you know,
devices that can just access the entire
internet.
Because I know when I was like younger,
uh, having access to the, to the internet,
unrestricted access to the internet was
probably not the greatest thing for my
development.
Right.
And I'm sure many people who are like,
you know,
iPad kids or like Gen Z type people
might also like share the same thing.
Like basically having answers to any
question and, you know,
access to anything at any point is not
a great thing in some cases.
So, you know, I think that's,
that might be something that needs to be
tackled from a different angle,
from, like, parents or parental controls.
But I don't know,
I don't think this should be up to the
government to decide.
And it doesn't really seem like it
respects people's privacy anyway.
So yeah, I don't really have any more to
add here.
Yeah.
I don't, I don't think I do either.
It's just...
I think the thing I'll end with is:
if you're in the US,
definitely contact your representatives.
I certainly will be this coming week.
And, you know,
try to outline, I would argue,
try to outline why you're against this.
Um,
I don't know if that will increase your
odds,
but I feel like it would be a
lot more effective instead of just be
like, Hey, I'm against this thing.
Be like,
I'm against this thing because it takes
away agency from the parents.
There's no meaningful protection of the
data, you know,
all these kinds of things.
Maybe we'll get lucky.
And maybe some of these politicians will
read... I mean, obviously they won't,
their aides will read this.
But maybe some of their assistants will
read some of these responses and just be
like, oh, you know what?
These are legitimate concerns.
And I think also spreading awareness
around this.
Like,
I know I'm always the first one to
be like, Hey, contact your politicians,
but, um,
I really think telling the parents around
you,
this takes away your agency as a parent.
What happens when there's a data breach
and your ID gets leaked?
I think those are things that will get
their attention and get them to sit up
and realize, oh, yeah,
maybe this isn't the best way to go
about this.
Because a lot of people really don't see
what the issue is, right?
There's all these false equivalencies,
like, oh,
you have to show ID to go into
a bar.
But it's just...
Yeah.
So and also real quick,
Swiss Kill said here is, you know,
it's more effective than stop that.
I also want to point out, like.
Be nice to people,
because if you just send them an angry
message about like you're an idiot and
this is the dumbest law ever,
like they're just going to put you on
the block list.
Well,
I don't think legally they can block you,
but they're just going to ignore you.
So, yeah,
my mom used to say you catch more
flies with honey than vinegar.
So, yeah, I don't know.
Could you maybe offer some...
How exactly can you get in contact with
this person in particular?
That's a good question.
Hold on.
Let me look it up here because I
did...
I'm not going to show my own blog,
but I did write this really long opinion
piece on my own blog.
And I did include...
So congress.gov, house.gov, and
senate.gov are all websites you can use to
find your federal-level politicians in the
US, which is probably who you want right
now, because this is a national law.
There's also commoncause.org and usa.gov,
which are some additional websites to help
you figure out who your representatives
are.
Honestly, if you just web search,
like, who are my political
representatives,
usually several websites will pop up.
In case anyone is not aware,
you will have to put in your address
because that's what determines what
districts you fall in and stuff.
But yeah, I don't know.
To me, it's worth it.
Yeah.
And real quick, Canada said, I was under the impression that Google and Apple oppose age verification.
They all do, which I think should be extremely telling, that even Meta doesn't want to do this.
OpenAI is one of the companies that's been lobbying these groups behind the scenes; I forget where that came from recently.
But yeah, Google and Apple have openly pushed back against the App Store Accountability Act.
Nobody wants to be responsible for this data, which to me is extremely telling.
It's the one time all these companies that are built on violating your privacy, monetizing your data, collecting everything... Like, Meta built an app that purposely opened up ports it doesn't normally open to get around the sandboxing built into the phone.
I think this was on Android, but it may have been iPhone, or both; I can't remember.
Either way, I forget the exact details of the story, but they purposely found ways to get out of the sandbox and bypass the protections built into the device to spy on the other apps on your phone.
This is the same company now saying, we don't want to be responsible for this data.
And I think that should be extremely, extremely telling.
Thank you for coming to my TED Talk.
Tip your servers.
Yeah, that's all I got.
Nice.
Yeah.
So I guess with that being said...
I guess in a minute we can start taking viewer questions.
We've already kind of had a couple here.
So if you've been holding on to any
questions about any of the stories we've
been talking about so far,
go ahead and start leaving them.
You can either leave them in the chat
or you can also leave them in the
respective forum thread for this live
stream.
And for now,
let's check in on our community forum.
So there's always a lot of activity over
there.
And this week was no,
You know what I'm trying to say?
No exception.
No exception.
Yeah, this was a very, very busy week.
So this first one here: there was a Visa card vulnerability.
If you haven't seen it already, there was a video from Veritasium.
To quickly recap what the video was about: they did a collab with MKBHD where they had his phone and were able to extract ten thousand dollars from his credit card without any input from him.
They just had his phone, and they were able to extract ten thousand dollars.
So that was kind of concerning, and definitely a very interesting video.
I haven't had time to watch the entire thing yet,
Um,
cause this week has been incredibly busy,
but, um,
Definitely worth checking that out.
And I'll just highlight Jonah's comment
here because he did watch the video.
I just finished watching this video a
minute ago.
I knew this would be express transit
related,
but this interplay between that and Visa is interesting.
So basically the way this exploit works is it uses this thing called Express Transit mode, which
I mean, I can't comment on whether this is common in the US, but in Australia it's very common.
So when you tap onto public transport, you can use your phone to tap on, but you'll also have to authenticate yourself.
That's how it normally works.
But if you enable express transit mode,
it actually just allows you to tap your
phone without authenticating at all.
So that's why this becomes a bit of
a problem, right?
Because it would be fine if you had
to authenticate and then it goes through.
But basically this exploit was able to extract ten thousand dollars from MKBHD's phone without him verifying anything.
And apparently this was due to a vulnerability on Visa's side: they didn't cryptographically check the transaction, or something along those lines.
I'm not entirely sure of the specifics behind that.
There was another post here from Jonah asking whether Express Transit mode was enabled by default with a credit card on his device.
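Since we were hand-waving at the "no cryptographic check" part, here's a toy model of why a relay works when the card's response isn't bound to the amount or the terminal. To be clear, this is not the actual EMV/Visa protocol; the key handling and function names are all made up for illustration:

```python
# Toy relay model (illustrative only; NOT the real EMV/Visa protocol).
import hashlib
import hmac
import os

CARD_KEY = os.urandom(16)  # secret the "card" shares with its "issuer"

def card_respond(challenge: bytes) -> bytes:
    """The 'card' proves it holds the key, but signs nothing about the
    amount or the merchant, so its answer can be relayed anywhere."""
    return hmac.new(CARD_KEY, challenge, hashlib.sha256).digest()

def terminal_verify(challenge: bytes, response: bytes) -> bool:
    """The 'terminal' only checks the challenge, not who relayed it."""
    expected = hmac.new(CARD_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# An attacker forwards the terminal's challenge to the victim's phone
# over NFC and relays the answer back; the terminal can't tell.
challenge = os.urandom(8)
relayed_response = card_respond(challenge)  # obtained via the relay
assert terminal_verify(challenge, relayed_response)
```

The fix the discussion points toward is binding the response to the amount and a terminal identifier, and actually checking that binding, so a relayed tap stops validating outside the original transaction.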
I mean, I can comment on that,
that it is a specific thing you need
to enable.
And sometimes it does.
We don't have public transit in the US.
I thought you did.
I thought you did.
Oh, is it private?
We do, but it's pretty garbage.
So for all intents and purposes, we don't.
Years ago when we were still dating and
we first started living together,
my wife had a job that was maybe
about a twenty minute drive by car.
And long story short, I had had...
The particular place we lived at,
it was really easy for me to get
downtown to go to work.
And so I was like,
you should try the bus one of these
days.
Just try it.
You don't have to drive.
You don't have to park.
It's really handy.
It took her three hours by bus.
And she never did that again.
So our public transit in the US is
absolute garbage.
But continue.
Oh, I see.
OK.
I definitely have seen some public transit
stuff.
I mean,
I think there's definitely some places
where it's a little bit better,
from what I've heard.
Yeah, like New York is okay.
I had good experiences in San Francisco,
although I know everybody who's from those
places are just like, really?
But yeah,
it's definitely not great in most places.
Okay, right.
I don't know.
We've basically had this massive blitz
from Apple in Sydney where they were
basically saying, like,
use Apple Wallet to use transit.
Use Express Transit mode to speed up your
commute, like all these ads from Apple,
which is kind of funny because...
Now we're learning that there's a vulnerability with Visa cards and Express Transit mode.
I personally enabled it once and I
accidentally tapped onto some transport
twice and then I disabled it because,
yeah,
you probably don't want it to
automatically activate like that.
But it is something you have to opt
into and it is part of the flow
when you set up a credit card in
Apple Wallet.
I do wonder if this...
could also be exploited on Google.
In the video thumbnail there's a picture of a Google Pixel and it says "safe," but I believe Express Transit mode is also available on Google Wallet as well.
Maybe there are more checks going on there that secure it better.
But yeah,
there were some comments responding to
Jonah's thread there saying that their
credit card wasn't enabled by default with
this feature.
So unless you accidentally enabled it or
did something in the setup process,
then it's probably not enabled.
I think this was kind of unfortunate for
MKBHD because he just got ten thousand
dollars removed from his credit card.
Obviously, they gave it back.
But imagine if that was an attacker: that would be ten thousand dollars stolen, and all they'd have to do is steal someone's phone.
So, you know, I think
trying to reduce the things that thieves
can do with a mobile device is good
because it makes it less likely to be
stolen.
I don't think stealing an iPhone or a Google Pixel or any of these other devices is a very good idea.
You're basically stealing a tracking device at that point.
So, yeah.
Definitely an interesting thread there
with some discussion.
Do you have any thoughts on this one,
Nate?
Yeah,
I think it was really the video that
everybody found interesting.
But it seems that this is primarily
limited to Visa cards.
Like, again,
that comment you were looking at from
Jonah, he said...
Again,
I didn't watch the video either because
it's been a busy week.
But he said,
I'd have to agree with Apple that this
is primarily a Visa issue.
But Visa's point that it is not worth
fixing is probably accurate too.
So I definitely want to try to watch
the video this weekend.
But yeah,
I was kind of asking Jonah a little
bit more about this before we started
streaming.
And there aren't really any defenses at this time other than to disable the automatic transit, or whatever it's called, Express Transit.
So it's just kind of a reminder that privacy and security and convenience are almost always... well, I want to push back on "always," because I think there have actually been a few times that privacy and security have made my life more convenient.
But ninety-plus percent of the time they are on opposite ends of the spectrum from each other.
And that's part of a threat model, right?
You have to ask: what am I trying to protect? Who am I trying to protect it from? How much trouble am I willing to go through to protect this thing?
And I think...
I don't really use a lot of tap-to-pay
stuff myself,
mostly just because my phone doesn't
support it.
So I can't say for certain,
but I would have to imagine that for
most people, it's pretty like...
It's probably not the end of the world
to disable this express transit.
Sure, it'll slow you down a little bit.
And I mean, I also have to ask,
again, I didn't watch this video,
but genuinely asking,
how easy would this be to pull off?
Because just because it can be done,
I mean,
we can put people on the moon.
Kind of hard.
We haven't done it a whole lot.
So, you know, it's the same thing here.
Like,
just because this can be done doesn't
necessarily mean that it's something that
you have to worry about every random
person on the street doing this.
So,
if it's something that's very unlikely and
you're in a really,
really busy area where it's like, no,
dude, that extra, like,
two seconds it would take me to do
this would actually kind of add up over
time and get really annoying.
Like, okay,
maybe it's worth leaving it on.
But if it...
if it's not really going to impact your
life,
it's probably better to err on the side
of caution.
And somebody also said here that Mastercard has resolved this issue, and Visa's stance is that the possibility of this happening is so small.
I agree with you.
If it's one of those things where it's
like,
we know there's a solution and there's
really no reason not to do it.
I mean, that's what I'm basically saying,
right?
Like if you have no reason not to
turn the setting off,
then just turn it off.
And I agree with you a hundred percent.
Like if Visa could easily fix this,
then they really should.
But yeah,
It doesn't sound like they're going to do
that anytime soon.
So unfortunately, it's on us.
As usual,
it's on us to care about our own
privacy because these companies do not,
or our own security in this case,
because these companies do not.
So I think that's kind of my takeaway
from that one.
Yeah,
I just want to highlight Pineapple
Express's comment here.
Transit...
That's a good comment.
Yeah.
Thanks for commenting.
Transit.
Thanks for adding to the discussion.
But I think, yeah.
Wasn't pineapple express a type of weed in
a movie?
Sorry.
Possibly.
It's an old movie.
Yeah, it's definitely,
I feel like it's definitely some
references in the chat usually.
But yeah, I mean, the extra step between authenticating and tapping is so small that you have to ask, is this feature really necessary?
So I feel like it's not really that much of a concern.
Just disable it.
Just don't use this feature.
You kind of know when you're going to get off transit: when you're getting off a train, a bus, a ferry, whatever.
So just time it, authenticate, and then tap.
I think that's the easiest way to avoid falling into this issue.
But I guess there was also another thread here from someone talking about airplane mode on GrapheneOS.
I'm just going to read their comment; I'm not going to mention their name for privacy reasons.
"I think people should be aware that airplane mode on GrapheneOS doesn't completely turn off the SIM, as you can still receive and make calls over Wi-Fi, a technology known as VoWiFi.
I am not certain about it, but I think this means your ISP can know your location, at least when you stay home.
VoWiFi might only work on a router from the same ISP as your mobile.
You can disable it in the SIM settings."
This is interesting.
Do you have any thoughts on this?
I don't even know if this is a
thing in Australia.
So you guys don't have airplane mode in
Australia?
I mean, the VoWiFi.
Oh, voice over Wi-Fi.
Yeah.
I don't know if we have that here.
I know in a lot of phones there's a setting to enable Wi-Fi calling, which maybe is the same thing; maybe voice over Wi-Fi is the protocol that enables it, and "enable Wi-Fi calling" is just what the setting is called.
But yeah.
No, I wanted to highlight this because we do recommend on Privacy Guides using airplane mode whenever possible.
And I really wanted to point out that this is kind of a niche, more advanced thing, but it's still good to know.
It's always good to have things on your radar, right?
It's always good to have that information and make decisions accordingly.
So here's actually one of the comments
that we wanted to highlight to kind of
explain this.
Disabling your SIM does not... where does it go?
Airplane mode is intended to disable cellular radios, not your SIM, and how it works is well documented on every mobile OS.
I think they said that GrapheneOS documented that.
GrapheneOS has really good documentation.
They said, likewise,
disabling your SIM does not disable your
cellular radios,
and your device will still ping cell
towers unless you enable airplane mode.
It's in the name, really: airplane mode exists solely to comply with regulations requiring cellular radios to be completely turned off.
The privacy factors are a side effect.
So basically I think what,
what they're saying is that if you enable
airplane mode,
you are turning off the radios,
but not necessarily the SIM card itself.
So if you do have other things turned on, like voice over Wi-Fi, then that is potentially an issue if you're not using a VPN, for example.
I'm assuming a VPN would cover that, because it's going over Wi-Fi.
I mean, it's a really good thread, because a lot of people talked about how, apparently, on some phones the voice over Wi-Fi traffic still goes outside the VPN, but GrapheneOS tries to send everything through the VPN as much as possible.
So it's one of those things where, again, I think this is probably a more extreme privacy thing.
I think it's probably not going to make
or break most people,
but it's still definitely something that
you should know of and you should be
aware of.
And if that is part of your threat
model, you should factor that in.
It's good information to have, because I was also kind of under the impression that turning on airplane mode would turn off the SIM.
Or at least, I don't know what I was under the impression of, to be honest.
But it's definitely something interesting
to keep in mind, for sure.
And yeah, I see you highlighted,
Jonah said that voice over Wi-Fi and
enable Wi-Fi calling are the same thing.
So good to know.
OK.
Yeah,
I've never heard it called VO Wi-Fi
before.
I thought it might be a different thing.
But yeah, Wi-Fi calling, we do have that.
Yeah, same here.
One thing as well with airplane mode: as far as I'm aware, if you make a call to emergency services, it still connects to the tower.
I'm pretty sure the whole point of airplane mode was to stop signals coming out of the device when you're in an airplane, so I guess that's fine, except when you place an emergency call.
Maybe there are laws that say it has to be bypassed in emergency situations.
I'm not sure.
Yeah, and to be honest, I don't know how it works in terms of bypass.
I'm assuming not, just based on the true crime stories I've heard, but I don't know if cops can still track you.
Obviously, what I'm saying is, if I have airplane mode on and I call 911, yes, it's going to go through.
What I don't know is, can they reverse that?
Could the cops surreptitiously decide to figure out where I am when I have airplane mode on?
My money says no, but I could be wrong on that one.
Yeah, I don't know, it's interesting stuff.
I was going to talk about the different triangulation with cell versus Wi-Fi, but I don't think that's really relevant to this.
It's interesting stuff, though.
Like I said, if you guys have some time, definitely go check out that thread and give it a quick browse, because it's not a very long thread.
I think there were only, what, not even ten replies.
So it's just one of those "the more you know" kind of things.
On that note,
we're going to take viewer questions and
we're going to start with the questions on
our forum from our paying members.
You can become a member by going to
privacyguides.org and clicking the red
heart icon in the top right corner of
the page.
Or I keep forgetting,
we also have privacyguides.org slash
donate, which will take you right there.
So we only had one question this week.
Somebody said that Privacy Guides currently does not recommend enabling the "tell websites not to sell or share my data" feature in Firefox.
Should we enable this?
And is it still worth enabling even if you don't reside in a jurisdiction that makes the GPC opt-out functional, where it's more of a statement of preference?
So, I have a lot of beef with Mozilla, but one thing I will give them, and it's both a pro and a con: if you click the button that says "tell websites not to sell or share my data" on Firefox, that does not enable Do Not Track.
That enables GPC.
And on the one hand,
I wish they would make that a little
bit more obvious.
I did have to dig into the documentation
to learn that.
But on the other hand,
the average person probably doesn't know
the difference anyways.
So what does it matter?
Crap, I'm out of water.
So as I understand it,
and someone please correct me if I'm
wrong,
I don't think there's a drawback to
enabling GPC.
In the past,
Do Not Track had this thing where when
you enable Do Not Track,
it basically did something in the headers
that ironically made you stand out more.
It created a header that wasn't there,
and that was one more data point they
could use to track you.
And since there was no legal enforcement
behind it,
a lot of websites straight up say in
their privacy policy, they're like,
we do not respect Do Not Track requests.
which is crappy,
but at least they say it.
So...
I don't know.
What I was told is that the way GPC works is somehow more privacy-respecting.
The technical stuff goes over my head; I don't understand how, but it's one of those things where they're not supposed to be able to track you with it.
Like that was a lesson learned from Do
Not Track is now we've implemented this in
a way where it cannot be used as
another fingerprint data point.
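For what it's worth, the signal itself is simple: a browser with GPC enabled sends a `Sec-GPC: 1` request header and exposes `navigator.globalPrivacyControl` to page scripts. A minimal server-side sketch of honoring it might look like this (the function name is our own invention, and real web frameworks normalize header casing for you):

```python
def gpc_opted_out(headers: dict) -> bool:
    """Return True if a request carries the Global Privacy Control signal.

    Per the GPC proposal, browsers with GPC enabled send the request
    header `Sec-GPC: 1`; any other value, or no header at all, means
    the user expressed no opt-out preference.
    """
    return headers.get("Sec-GPC", "").strip() == "1"

# A site honoring GPC would skip setting ad cookies when this is True.
print(gpc_opted_out({"Sec-GPC": "1"}))  # True
print(gpc_opted_out({}))                # False
```

Because it's a fixed one-bit header that laws like the CCPA can make binding, it doesn't carry the extra fingerprinting surface that made Do Not Track counterproductive.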
So even if you're not in an area
where GPC is required,
as far as I know,
it still doesn't hurt to turn it on
And if they don't honor it, well, it's one of those things where a lot of people say there's no point.
Sorry, let me back up.
So I was told by a lawyer one
time that if you do not interact with
a cookie banner,
companies are supposed to treat that as
the same as saying, don't track me.
And they're not supposed to track you.
They're not supposed to put the cookie
there.
A lot of people will argue that the cookie banner doesn't really matter and they're just going to track you anyway.
But in my opinion, it doesn't hurt to say no.
I'm having a hard time with words tonight; it just doesn't hurt is what I'm getting at.
As far as I know, if the company's not going to respect it, they're not going to respect it regardless.
But if they do respect it, it's not going to make you any more fingerprintable.
I know I remember,
I wish I could remember what it was,
but there was a period where I was
like going to websites and I would keep
seeing a little pop-up just for a second,
a very non-intrusive pop-up.
Imagine that, crazy.
That just said like, hey,
we saw your browser has GPC.
We respect that and we're not tracking
you.
And I was like, holy crap, that's awesome.
I haven't seen it a lot lately,
but yeah.
So as far as I know, in my opinion, it's totally worth enabling.
Jonah said we'll have to make a video or something explaining it more.
He didn't say I was wrong, so that's good news.
I think I was right about that.
Yeah.
I mean, I agree with all points.
Well said.
Um,
I didn't really have anything to add to
that.
Um, yeah.
Cool.
That was our only question in the forum.
The only other things: one person said that Chrome is planning to add the GPC toggle this year (we still don't recommend Chrome), and someone else said that it's enabled by default in LibreWolf, which makes sense.
And we do have one question in the
comments so far.
Swisskill is asking about any router
recommendations in the EU after the US
banned foreign manufactured devices.
I don't have any reason to believe that
there's any backdoors.
I don't... Okay,
I'm going to be a little political here.
A lot of what the administration is doing
does not make sense,
even to a lot of us Americans.
Some of it does, I will say.
That doesn't mean I agree with it,
but some of it does have a logic.
Some of it very much looks like somebody
just woke up and decided something one
day.
And this is one of them.
As far as we know, there's absolutely no evidence to suggest backdoors in any of these routers.
If you go back and watch, we made this our headline story on the podcast when it happened, so go back and check that one out; I don't know what episode that is.
We don't know of any existing backdoors.
All routers are currently foreign-manufactured anyway.
So this whole idea of "the U.S. is banning foreign-manufactured routers" really means the U.S. is banning all routers, basically.
A quick update: Netgear actually finally got their first exemption.
That was almost one of the stories we covered, but it was a pretty crowded week, so we decided that one was the weakest.
But personally, I wouldn't worry about it.
What I would focus on instead is looking for a router that's compatible with open-source firmwares like OpenWrt.
I've had good experiences so far; FreshTomato is still working great for me, and DD-WRT used to work really great up until about a month or two ago.
So yeah, I would focus more on looking for an OpenWrt-compatible router or something similar, personally.
That'd be my recommendation.
Yeah, I feel like the big one I see a lot of people using is the GL.iNet routers.
Not all of them, but the majority of their more reasonably priced ones support OpenWrt, and they also ship with their own spin of OpenWrt, which is a bit more user-friendly.
OpenWrt allows you to do a lot, but its interface is not the greatest, let's just say.
I'm not a networking expert, and I have had issues configuring stuff properly because I'm not super network-savvy.
It's so easy on GL.iNet or DD-WRT or FreshTomato to set up separate networks, set up VPN connections; all that stuff is a lot easier on those.
So GL.iNet is one that I see recommended a lot.
This is probably a pretty regional thing, but we have DrayTek.
I think they're a Taiwanese company, but a lot of their routers also support OpenWrt.
I can't really think of too many other companies.
Yeah, I mean, I can't really think of any European companies that make routers, really.
Can you?
I think there's one.
Oh, my God.
Jonah and I talked about it because I
remember the subtitles got it right,
and I was like,
I've never heard of this company.
And so I had to ask him if
that's Microtech or something.
I think they're like a Finnish company.
Everybody's going to be so offended that I
can't keep my European country straight.
Microtek is a Taiwanese company.
No, no, no.
There's another one.
There is.
God, what a...
Yeah, MikroTik.
Yeah, Pineapple Express got it.
It's not like that.
Latvian.
They're Latvian.
That's who they are.
Okay.
I knew they were European.
Apologies to Latvians.
But yeah,
so they're a Latvian network equipment
manufacturing company.
I don't know much about them,
but I remember Jonah mentioned them when
we were talking about this story in the
first place.
Yeah.
And I also just wanted to say,
I checked,
because I know we have a page about
routers.
OpenWrt and OPNsense are currently our two top recommendations.
So if you can find something that's
compatible with those,
that would probably be your best bet.
Yeah.
I mean, you can also buy the OpenWrt One, which I got, and which supports the OpenWrt project.
But again, as far as I'm aware, that was coming from China.
I feel like everything's made in China, though.
I haven't heard of an EU-made router or anything.
Yeah, I was going to say,
that was kind of the point that Jonah
and I kept harping on when we talked
about this story,
is that there are no American-made
routers.
They're all made in China,
except for apparently there's one from
Starlink,
which I'm sure is a total coincidence.
But anyways.
I don't know if Europe's any different, but here in America, for sure, there are no made-in-America routers.
Some of them are designed here by American companies like Netgear and Cisco, but they're all manufactured and assembled in China or overseas.
So, yeah.
I am seeing some interesting stuff about
MikroTik.
Um,
apparently a lot of their stuff isn't made
in China now.
It's made in other countries.
So that is interesting.
So I guess we're seeing a lot of companies kind of divesting, or at least trying to... what do you call that word? Having multiple bases of manufacturing.
Diversifying.
Thank you.
I don't know what it is today.
I can't find any of the words I'm trying to say.
Me either.
Words are hard tonight.
But yeah, it's good to see more of that.
I mean, I think the national security concern is probably still the same, right?
Like Vietnam or Malaysia: there's still the possibility of them doing something sus, but I think it's probably not that likely.
I haven't seen any evidence of any routers from these big American companies being tampered with like that, so I'm not sure how much of a risk that is.
And just to point that out: first of all, we don't have any evidence that there have been any issues.
This is all stuff we went over on the show.
I think the bigger concern would be the cheap off-brand or knockoff stuff, because we have seen... I don't know about routers specifically, but we have seen it with Android TVs.
Like if you buy the really cheap Android
TVs on Amazon,
we've seen articles that talk about how
like, yeah,
a lot of them come preloaded with malware
and they run botnets and stuff like that.
So I think if you're getting a reputable name-brand router from a reputable source, I don't think there's really that much to worry about.
And if you want to go the extra mile and be extra safe, which of course we always recommend, then you should put something like OPNsense on there.
I definitely want to get the OPNsense one next time I buy a router; I've been very excited about that project, I think it's really cool.
My current router still has a lot of life left in it, so I'm not ready to do that yet.
But yeah, I don't think it's a huge...
I really disagree with the government on this whole "it's a risk" thing, because it literally is just "trust me, bro, I said so."
And not to get too far off topic,
but that's an issue I've always had.
Like I've literally met people that when I
talk about privacy, they're like, well,
I have a buddy who works in national
security and he says like they've stopped
so many bad things.
And I'm like,
then your buddy needs to come forward and
tell us about that.
Because right now, every study we have says that mass surveillance has literally never accomplished anything and makes things worse rather than better.
And so if it is actually making the
world a better place,
we need to have that information so that
we can have this debate in good,
honest faith.
Because right now it doesn't seem like
that's the case.
And so that's how I feel about this
whole like router ban.
It's like, oh,
these things are national security risk.
Where's your evidence?
Because right now there is no evidence and
you sound like an idiot.
So yeah, I don't know.
That's my opinion.
Yeah, I don't know what it's like in the US that much, but in Australia there's a lot of fear-mongering about that sort of stuff, about how we need more laws to catch criminals.
I mean, we have the Assistance and Access laws, which basically mean that police get access to stuff without a warrant.
I think there are plenty of countries doing similar things.
I just want to quickly circle back to GL.iNet.
I haven't really researched this because I don't own a GL.iNet router; I just see that it's what a lot of people use.
But it does look like, at least according to their website, one of their offices is in Hong Kong and the other one is in Shenzhen, so I guess just be aware of that if it's a concern.
I mean, basically all these router companies, even the OpenWrt One, are manufactured in China, so I'm not really sure what the extra risk is there compared to another company.
I think GL.iNet is very reputable.
So, what's someone saying?
Sinobu said it's actually worse in China and North Korea; they're constantly tracked.
Yeah, so in a lot of these countries... I'm not sure about North Korea, but I've definitely seen stuff about the mass surveillance they have in China.
They have, like, more cameras than people.
Well, not more, but they have a lot of cameras.
If you've ever been, there are cameras literally everywhere; it's kind of a striking thing to see.
So obviously we don't want cameras literally everywhere tracking everybody, or at least recording what everyone's doing.
So yeah, I don't know.
It's been a thing where, a couple of years ago, people were making a big deal about how China had a digital ID system and how super dystopian it was.
But now we're like, oh no, let's introduce a digital ID bill.
It's like, guys, what about what you were saying a few years ago?
What's happening?
I've literally seen some politicians here
in the US point out,
or maybe not the politicians,
but I've seen people point out,
they're like,
this is literally the stuff we criticize
Russia and China for.
Why are we doing this?
So yeah, it's not cool.
Yeah, it's kind of frustrating.
But yeah, I mean,
are there any other comments you can see
here that we haven't already got to?
No, I haven't seen anything.
Looks like everybody's been a little bit
quiet this week,
but we still appreciate you guys tuning in
and watching, even if you're lurking.
Thank you for listening.
All the updates from This Week in Privacy
will be shared on the blog every week,
so sign up for the newsletter or subscribe
with your favorite RSS reader if you want
to stay tuned.
I want to remind you guys,
we send the newsletter at the same time
that we go live,
so it also works as a really good
reminder that we're going live.
Little notification there.
If you prefer to listen on audio,
we also offer a podcast available on all
platforms and again on RSS.
And this video will be synced to PeerTube.
Privacy Guides is an impartial nonprofit
organization that is focused on building a
strong privacy advocacy community and
delivering the best digital privacy and
consumer technology rights advice on the
internet.
If you want to support our mission,
then you can make a donation on our
website, privacyguides.org.
To make a donation,
click the red heart icon located in the
top right corner of the page.
You can contribute using standard fiat
currency via debit or credit card,
or you can donate anonymously using Monero
or your favorite cryptocurrency.
Becoming a paid member unlocks exclusive
perks like early access to video content
and priority during our Q&A.
You'll also get a cool badge on your
profile in the forum and the warm,
fuzzy feeling of supporting independent
media.
So thank you again for watching and we'll
be back next week.