Welcome back, everyone,
to This Week in Privacy,
our weekly series where we discuss the
latest updates with what we're working on
within the Privacy Guides community,
and this week's top stories in the data
privacy and cybersecurity space,
including allegations that WhatsApp is not
end-to-end encrypted,
France and the UK restricting some online
tools for minors,
TikTok's new US ownership, and more.
I'm Jonah, and with me today is Nate.
How are you doing today, Nate?
I'm good.
I'm good.
How are you?
I'm doing excellent.
Thank you.
For those of you who don't know,
Privacy Guides is a nonprofit which
researches and shares privacy-related
information,
and we facilitate a community on our forum
and Matrix where people can ask questions
and get advice about staying private
online and preserving their digital
rights.
Before we dive into our first story about WhatsApp,
I want to give some quick updates on what we've been working on at Privacy Guides this week.
Why don't I start off by handing it
over to you, Nate,
to talk about the video side of things?
Sure.
There's not too much new with the videos.
Let's see, the smartphone course we're doing, we keep adding on to it, so it's hard for me to know how to describe it.
For the intermediate tier, the Android section is done.
And I believe we're just trying to work
out some technical issues with PeerTube.
Once it's on PeerTube,
we will be posting that for members.
The iOS version we're hoping to have done
next week.
And Jordan has begun editing the private
browsing video that I've been talking
about.
And that will hopefully also be coming out here soon.
In the meantime, I have moved on to scripting a video about private messaging, so I'm excited to share that one with you guys.
And again, we've just been putting out a lot of clips.
We started putting out horizontal clips as well, like regular aspect ratio clips, on the Privacy Guides Shorts channel.
And I know it's a really common thing when we do these kinds of news shows that people want something where they can quickly and easily share just that one story with people.
So definitely check that out if that's
something that you would like.
Nice. In other Privacy Guides news, things have been pretty active on our forum lately, with lots of good discussions going on.
I know that you, Nate, and Freya as well have been working on a lot of news brief articles lately, so those have been coming out.
Other stuff is still being worked on behind the scenes, but I've been talking to Em about a big project that she has been working on for the past few weeks now, and that's coming out relatively soon.
Hopefully, within the next few weeks or so, we'll have more updates to share with you on the stream about that.
But yeah, lots of progress is being made,
lots of big plans for the site and
for the videos in twenty twenty six,
especially as we get into the new year.
I feel like a lot of people who
have been working on all this stuff have
been feeling pushed pretty hard lately.
We've been doing a lot of work,
but hopefully it all pays off and people
like it and we can reach new people
with all of this privacy stuff.
But in terms of specific updates,
I don't think we've pushed a new release
of the website on GitHub or anything like
that.
So no changes to the recommendations or
anything so far.
But yeah,
all of that stuff is still being worked
on in the background.
And if you are hoping to see something
in particular, definitely join our forum,
join the community and talk with us a
bit about what you want to see because
a lot of the stuff that we're doing
is really
built on this community and what you all
want to see and what would make the
most impact in the privacy rights space.
With all these updates out of the way,
I think we can move on to some
of the biggest news stories that we've
seen in privacy and security in the past
week.
I know you wanted to start off with
the headline story here,
so why don't I pass it off to
you, Nate, to talk about that.
Yeah.
Sounds good.
Let's talk about WhatsApp.
So, WhatsApp, as I'm sure most of our listeners know, is an encrypted messenger brought to you by Meta, the same people who make Facebook and Instagram (well, bought Instagram).
And yeah, WhatsApp, as far as we know, is end-to-end encrypted, and it uses the Signal protocol.
So there are a lot of concerns about
the metadata collection of WhatsApp.
But up until now, we've always believed,
well, the content itself is encrypted,
which is better than nothing.
Although there is now a new lawsuit that
alleges that, no, actually,
that's not the case.
And they don't mean that in a "well, technically, kind of, sort of" way; they mean it literally: no, it is not end-to-end encrypted.
And this lawsuit claims that if you are a Meta or WhatsApp employee, all you need to do to access the messages is send a task, which is, I guess, just what they call their internal tickets or requests in Meta's internal systems.
You send a task to a Meta engineer and you just say, hey, I need access to these users' messages for whatever reason.
And they say that the engineering team will then grant access, often without any scrutiny at all, and the worker's workstation will then have a new window or widget where they can pull up any WhatsApp user's messages based on the user's ID number, which is a unique number.
And then once they have the access, they can read messages.
They say there's no separate decryption step.
It's just available right there, which I'll come back to, I guess.
They say that these messages are
commingled with additional messages from
unencrypted sources.
Not entirely sure what that means.
Maybe they're talking about the DMA.
I think WhatsApp now has to
federate or combine with other third-party
messengers as part of the DMA.
But I could be wrong about that.
I'm speculating.
They also say messages appear almost as
soon as they are communicated.
So this is essentially a real-time tool.
And they say the access is unlimited and
you are able to go back indefinitely in
time to view messages all the way back
to the user's first messages when they
open the account,
including messages the users believe they
have deleted.
So it is important to note that this
lawsuit does not provide any technical
details to back up these claims.
They say that there were some courageous whistleblowers and...
Yeah, I mean, obviously,
Meta is disputing this.
They say that these claims are, quote,
categorically false and absurd.
And they even say WhatsApp has been
encrypted using the Signal protocol for a
decade.
So, yeah,
this is definitely a big if-true kind of
moment.
And it's very concerning because WhatsApp
is...
incredibly popular around the world. Here in the U.S., not so much, but in other parts of the world, in Europe, in Asia, it's incredibly popular.
And again, up until now,
I want to reiterate,
we have concerns with WhatsApp.
I'm not saying it's great and you should
use it, but at least it was like,
well, you know,
at least the messages themselves are
encrypted.
And that's something that's more than we
can say for SMS or anything like that.
And apparently we can't even say that now, potentially.
So yeah,
The only other thought I wanted to point
out is I mentioned the whole widget thing
where they say there's no separate
decryption step.
In theory,
that doesn't necessarily mean the messages
aren't encrypted because maybe the
decryption is happening within the widget.
However,
the whole point of end-to-end encryption
is that that shouldn't be possible
regardless,
whether they're being stored in plain
text,
whether they're being stored encrypted.
The whole point of end-to-end encryption
is that the only people who should have
access are the ends.
And the server is not supposed to be
one of those ends, cough, cough, Zoom.
Sorry,
I had to take a shot at them
for that one.
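To make that point concrete, here's a toy sketch in Python. This is a one-time pad, not real cryptography and not how the Signal protocol actually works, but it illustrates the core promise: a relay server that never holds the key only ever sees ciphertext.

```python
import secrets

# Toy illustration only: a one-time pad shared by the two "ends".
# Real messengers use the Signal protocol, not this, but the idea
# is the same: the server in the middle never has the key, so the
# ciphertext it relays is unreadable to it.

def xor_bytes(key: bytes, data: bytes) -> bytes:
    """XOR each byte of data with the key; the same operation
    both encrypts and decrypts."""
    return bytes(k ^ d for k, d in zip(key, data))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # known only to sender and recipient

ciphertext = xor_bytes(key, message)     # this is all the server ever relays
recovered = xor_bytes(key, ciphertext)   # only an "end" holding the key can do this

assert recovered == message
```

If the server (or a widget running on an employee's workstation) can show plaintext, then by definition it holds the key material somewhere, and the system is not end-to-end encrypted in any meaningful sense.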
But yeah, like I said,
this really is big if true and would
really be bad because of WhatsApp's really
large user base.
I think that's a really good point you
just said about how the server shouldn't
be one of the ends,
especially because we know with WhatsApp
in particular,
but also with some other end-to-end
encrypted messengers,
most notably iMessage, unless you have Advanced Data Protection enabled, even if end-to-end encryption is working perfectly fine, very often they will have these backup features which are not end-to-end encrypted, and that potentially acts as a backdoor for service providers to get into it.
As far as I know,
that is the case with WhatsApp,
and that is a potential way that this
could be true without the...
end-to-end encryption of the transmission
itself being broken.
Maybe they have a way to access these
backups easily.
But again, that's speculation.
I think it's important to remember with
this story,
and actually one of our community members
just left a comment about this as well,
which is that this is...
a legal complaint right now. It's not at the stage where any evidence has been presented at all. There's no technical evidence within the document that's been shared that demonstrates any sort of backdoor, or any sort of compromise of the encryption of WhatsApp.
That being said,
with an app like WhatsApp that's closed
source and completely under the control of
Facebook, this is always a danger,
especially because Facebook has this
history of extensive metadata collection,
extensive...
you know, just general data collection,
actually.
And they are a company that's built
entirely on this like data driven
advertising model where collecting as much
data as they can is really paramount to
their business.
That creates a situation where it's very
hard to trust that they've implemented
end-to-end encryption correctly,
that they're not trying to weaken it
behind the scenes,
or that this is completely impossible.
So I don't think that this is...
out of the question.
But again, this hasn't been proven.
This just goes to show, I think,
that encryption in these apps needs to be
completely verifiable.
It needs to be open source,
it needs to use standard protocols,
and it can't just be a matter of
trust in the publisher of these apps
themselves.
Compare it to a messenger like Signal, which is open source: if this story had come out about Signal right now, many people, security experts, auditors could be poring over that source code, trying to see if there's any way that this could be true, right?
And that just isn't possible with WhatsApp.
And that's the danger of using these
proprietary closed source applications
like this for your communications instead
of more secure alternatives.
The other thing I wanted to say about this whole WhatsApp story is that even if this isn't true, even if they can't read your messages themselves, it's well known at this point that WhatsApp is not doing anything in terms of preventing the collection of metadata.
That's data about who you're talking with, data about when you're using the app, all of that stuff that's not the message content itself, right?
And so...
I mean,
there's this famous quote from a U.S.
government official where he goes like,
we kill people based on metadata, right?
Because they don't actually need the
content of your messages.
If people have access to this data, they can infer a lot about you: who you talk to says a lot about what you're probably talking about, especially if you're doing it on a regular basis or anything like that.
And all of that can be determined without
breaking end-to-end encryption at all.
And that's part of why I think WhatsApp
is such a dangerous application to use
because none of that metadata is
protected.
And Facebook is the last company on earth
who I would trust with that metadata in
question.
So...
Even if it's not true,
I would really encourage people to not use
WhatsApp personally.
But yeah, if it is true,
that is even worse.
We will have to keep an eye on
this story for sure because it is a
big if true moment.
Absolutely.
Yeah,
I agree with everything you just said.
This is one of the reasons, like, we know that open source is not the end-all be-all.
It doesn't guarantee that something is private or secure.
But like you said, if this was an allegation made against, like, Signal or SimpleX, this wouldn't even really be a story, because we could (I mean, I certainly couldn't, I don't know enough code for that), but we as a community could easily just go pore through the source code and be like, yeah, that's not what's happening here; we can prove that's not happening.
But yeah, and I do thank that listener for pointing out that, yes, these are allegations; they haven't presented any evidence.
I will be really interested to see what sort of evidence they present, if any.
And yeah, like you said, the metadata is so, so, so important.
The EFF has an amazing page where they talk about the importance of metadata, and they use some really sensitive examples.
Like if you call the suicide hotline (and sorry, I probably just got us demonetized), but you know, you call the hotline at two in the morning from the Golden Gate Bridge.
Do you really need the contents of the phone call to know what was probably going on there?
And yeah, I forget who it was that said that quote, but it is a really famous quote you can find very easily with a web search.
And what he was saying is exactly that.
Metadata is so revealing that you can make
a really convincing argument without the
content.
And at that point,
you can authorize a military strike.
Like, yeah.
I mean,
is it possible that something else is
going on?
Sure, of course, but...
Yeah, it's pretty wild.
It's good enough for most people, I think.
For sure.
This is why we encourage things like
Signal, SimpleX,
things that are metadata resistant,
are fully open source,
that go above and beyond to protect users
and their data, for sure.
Before we move on really quick,
we did get a question.
How do you convince your peers to stop
using WhatsApp?
Do you have any thoughts on that, Nate?
Send them this story.
I think... okay.
I mean, we get questions like this all the time,
this all the time,
and they're great questions.
But unfortunately,
there is no one-size-fits-all answer.
If we had the one secret answer that
could get people to take their privacy
seriously, we would have used it by now.
But I think –
One thing,
so I'm thinking particularly in the
context of like Europeans and Asians,
like people where WhatsApp is like a
common way to connect with businesses and
stuff like that.
And it's, I hate to say it,
but it's quote unquote kind of a
necessity.
I think for those people (I forget where I heard this, but somebody floated the idea), instead of trying to get people off WhatsApp, try to get them onto something else, in the sense that,
You can keep WhatsApp and you can use
it for when you have to contact a
business in Germany or something,
but all your friends are also on Signal
and we could use that too.
And then it turns into a thing where
like,
in my life, I still use SMS.
I still have some services.
I logged into a bank this morning that
texted me an SMS code,
not happy about it,
but there's nothing I'm going to do about
it.
So I still have to use SMS.
I can't stop using it,
but I've got ninety, ninety-five percent of my friends and my family on Signal.
And that's where I do most of my
work.
And so I think, rather than saying, oh, stop using WhatsApp, try to encourage people like, oh, we're all over here on Signal.
And sorry,
this just popped into my head real quick
while I was talking.
I have had amazing success by focusing on
features.
Like I hate to say it,
but let's be honest.
Most people don't care about privacy and
security enough that that's their driving
factor to move.
It's just kind of a happy bonus.
So my wife used to be a wizard
at this and I swear to God,
she should teach a class.
She was so good at getting people to switch to Signal, and she never brought up privacy. Like, you know, she'd mention, yeah, it's this encrypted messenger, but like,
It's got bigger, you know,
bigger attachment sizes and we're
comparing to SMS here.
So I don't know how it compares to
WhatsApp,
but like it's got bigger attachment sizes.
We can send GIFs, we can send reactions, because this was before RCS was a thing.
It's like all these amazing quality of
life features.
And I swear to God, five minutes later,
I would get a text from the person she was talking to, like, hey, I'm on Signal now.
I'm like, damn, I've been trying to get this person on Signal for two years.
How did you do this?
So, yeah,
I think that's probably unfortunately how
we're going to get like, quote unquote,
the average person to want to switch is
by showing them the quality of life
advantages.
And, you know, yeah, absolutely.
Figure out what signal does better than
WhatsApp.
I totally agree.
You kind of stole the thing that I
wanted to talk about,
so I won't spend too long on it,
but I mean, that is definitely my, my,
I truly believe that like all of these
private alternatives,
pretty much in most of these sectors,
if you really look at them and if
you really start to use them,
they are also quality of life improvements
because
I think people are fed up with technology and all of this surveillance and all of these anti-features, like all of our computers doing things that we didn't ask them to do now, or AI being jammed into them and popping up.
It was annoying decades ago in Microsoft Word with Clippy, and Copilot is just as annoying now in all of those products.
People just want functional tools, I think, and focusing on that aspect is probably the best way to drive adoption of these things, because it's simpler, it's more reliable, and it works better in my experience.
And finding ways that it works better than
WhatsApp and focusing on that rather than
trying to compare like the security
features that they already have,
like you said,
I think that that is the way to
go.
Moving on to our next story here.
This was reported by The Guardian.
French lawmakers vote to ban social media
use by under-fifteens.
So this starts out,
legislation which also bans mobile phones
in high schools would make France the
second country after Australia to take
such a step.
French lawmakers have passed a bill that
would ban social media use by
under-fifteens,
a move championed by President Emmanuel
Macron as a way to protect children from
excessive screen time.
The lower house, the National Assembly, adopted the text by a vote of one hundred and thirty to twenty-one in a lengthy overnight session from Monday to Tuesday.
It will now go to the Senate, France's upper house, ahead of becoming law.
The legislation, which also provides for a ban on mobile phones in high schools (which I think is a great idea personally, as a former educator), would make France the second country to take such a step, following Australia's ban for under-sixteens in December.
As social media has grown, so has concern that too much screen time is harming child development and contributing to mental health problems.
And so my big question coming out of
this, I think,
is how they plan to enforce this,
because we've talked a lot in the past
about age verification,
and I know this is a huge issue
in Australia right now,
I mean, this article doesn't mention that,
but they're the first country to really
take an approach like this,
banning not only very young children,
but teenagers from social media.
That's very challenging to do without these invasive age verification schemes that we have always been very concerned about, because age verification and ID verification are not just a matter of affecting children.
They force everyone who's signing up for these platforms to be verified, which includes adults, so there's no opt-out process here.
And that's a very dangerous privacy concern: how these IDs are going to be implemented in the first place, and also what data is going to be shared with all of these platforms.
That's something that we'll have to keep
an eye on.
So I'm not seeing in this particular
article how the French plan to deal with
this question.
I know that this is a pretty common
issue with a lot of legislation like this,
where lawmakers kind of
put some arbitrary goal together without
any steps or plan on how to make
it happen in a reasonable, secure,
and private way.
But yeah,
that's my biggest question out of this
story.
Did you have a chance to look into
this story any more than that, Nate?
No, just the article itself that you read.
If I remember correctly,
I don't have it pulled up in front
of me like you do.
If I remember correctly,
they did say that towards the end,
what you said there, where it's like, oh,
they don't really have a plan for how
they're going to implement this.
That's something they're going to talk
about next week.
I do find that so funny.
Yeah.
My favorite example of this,
New York City did that a few years
ago where they banned the sale of internal
combustion engine cars.
And then like the next year they went,
hey,
where are we going to put all the
chargers for these electric vehicles?
And I'm just like, seriously,
nobody had that conversation.
Come on, guys.
So, you know, yeah, it goes to show.
This is something I harp on a lot personally: I think we need better tech literacy in general, worldwide.
Because, and I know I've said this before, we have a lot of elderly people who, in their defense, I get it.
A lot of them existed in the days when color TV was the newest, fanciest thing.
And now we've got LLMs, and that's a lot to wrap your head around.
But then on the other end of the spectrum, we've got these people who are, well, I love to cruise r/TalesFromTechSupport on Reddit, but it also really makes me facepalm,
because on more than one occasion,
I've seen stories like my Wi-Fi isn't
working.
And then when they're like, okay, well,
are the lights on in the router?
And they're like,
I'm not at home right now.
Well, of course your Wi-Fi is not working.
Or, you know,
I've also seen the ones where they're
like, again, you know,
my computer won't turn on.
And it's like, okay, well,
is it plugged in?
I can't see under the desk.
The lights are off.
You don't say,
and I've seen those stories multiple
times.
And so it's like multiple people,
and that's just on Reddit,
multiple people are having this issue.
So I think my point being,
we need better tech literacy,
at least in the basics.
I'm not saying everybody needs to know how
to code and self-host their own
everything, but just to understand...
Like you were saying, that's a big thing.
They call it age verification. No, it's not. It's identity verification.
I got that one from Taylor Lorenz.
And, you know, for people watching this in France, you're going to have to upload your ID regardless of your age, and in the UK too, which we'll talk about in a minute.
It's not just minors, because how else are they supposed to know that you're not a minor?
Yeah.
And a lot of these politicians just think
like, oh, that's a technical problem.
Just as one of my other friends likes
to say,
nerd harder and we'll find a solution.
And it's like, no, there is no solution.
There is no magic bullet.
Technology is not magic.
I feel like on this show,
we've talked quite a bit about age
verification and these ID verification
problems.
And I would definitely encourage people,
if you are unfamiliar with some of those
problems,
with some of this topic, to check out the interview that you did with Taylor Lorenz,
because I think you really covered a lot
of good stuff that was more focused on
how that's going to affect the US and
some legislation that's going on.
But it really does apply to all of this stuff going on around the world.
We see it not just in France and
Australia, but the UK, for example,
has very strict ID verification laws.
It's becoming a real problem.
Ignoring the implementation side of this,
Do you have any opinions of your own
on this social media ban for children in
general?
Is that something you support just as a
general concept?
Or what do you think?
I mean,
I have some thoughts on this if you
don't,
but I'll pass it off to you first.
Oh, I have thoughts.
My thoughts... Honestly,
it's complicated because on the one
hand...
I, like most people,
I do not neatly fit into one particular
political label or another.
I have thoughts that are left-leaning and
thoughts that are right-leaning.
And one of my more libertarian thoughts is
that parents should be in charge of their
kids.
And I don't mean that in the sense
of like, well,
parents should just raise their kids.
I mean like parents should have the
autonomy and the freedom to decide if they
think their kids are ready to see a
movie, ready to play a game,
ready to engage with the internet.
I think parents should have that freedom.
But at the same time,
I think the internet is very distinctly
different from a movie or a video game.
Well, maybe not an online game,
but like an offline game in the sense
that the internet is a much,
much bigger place with much more
disturbing content on it.
I'm sure whatever the worst thing you've
seen in a horror movie is,
there's probably something worse on the
internet.
And I think it's a lot to ask
parents to constantly know,
even if it's the most well-behaved,
well-meaning, good kid,
that doesn't necessarily mean that the
people they're interacting with online are
also acting in good faith.
And I think that's a lot to put
parents in the position of having to
constantly try to monitor all of that.
Yeah.
It's tough because I don't want to take
away the autonomy of the parents to make
those choices,
but that's also a lot of work for
people who work full-time and may not
necessarily have the tech skills and
everything.
Just one more thing real quick.
Somebody here said in the comments that, you know, regulation is how we get clean water and safe food.
And obviously it's not perfect.
You know,
things get recalled all the time,
but I think we can all agree.
It's a lot better.
The term snake oil comes from the old
West days when people would literally roll
into town with literal snake oil and be
like, yeah,
this will cure your cancer and arthritis
and this, that,
and the other and everything.
And like, just give me your money, and I'm going to be fifty miles away by the time you realize I ripped you off, and you don't have a way to get me because of the technology limitations at the time.
And so regulations aren't always bad,
but it's definitely – I don't know.
I think it's a mix.
I think there's pros and cons,
and I don't really know what the right
answer is.
That comment that you pointed out is a good one because it does sum up a bit of how I feel about social media, which is...
If it's such a problem,
I think what we've seen in society is
that this isn't a problem that only
affects children.
And personally,
I don't think that children are...
significantly worse off than anyone else
who's constantly being exposed to these
social media algorithms.
And so from this perspective,
we have food regulation,
we have clean water regulation.
Could we have algorithmic regulation that
applies to all of these users of the
platform to protect ourselves in general
as a society against the harms of social
media?
I think that that could be
an approach because I think what people
don't think about or realize is that the
algorithms that make up something like
Facebook and Twitter are not
like they're not necessary for social
media to function um by the way facebook
you know from a user's perspective was
probably totally fine before they
implemented like the news feed and stuff
and people generally like twitter and the
chronological ordering of tweets from only
people that you follow before you know all
this discovery stuff was baked in and it
really tried to
get you into these bubbles and echo
chambers that I think is causing a lot
of people harm, not just children.
And I think we're focusing on children because children are growing up in this, and it's preventing them from building the skills that they need to survive in adult society, unfortunately, skills that most adults already have.
But beyond that, I think the harms of social media to all people are pretty apparent, and I think that some social media platforms, like Mastodon for example, demonstrate that building communities that you can interact with in a healthier way is possible.
And I think that regulation on that front, which would make these big tech companies more like Mastodon, for example (and Mastodon isn't the perfect social media, by the way), is a direction that we could go in.
We need to, I think,
get back to the internet being a place
where we share information and we share
knowledge and make it less of a place
where we just consume whatever information
the overlords of the internet have put on
the screen in front of us, right?
It needs to be more intentional,
and I think that that's the sort of
thing which could be done through
legislation that doesn't involve age
verification or anything like that,
because I think banning algorithms like that is really a lot like enforcing clean food and water regulations and that sort of thing.
It's a public health issue at the end
of the day.
I also definitely agree with the sentiment
that I've seen from some people in the
chat and also in this article from some
people that they interviewed,
which is that bans like this are overly simplistic, as this group said in the article here.
But it's also a form of digital
paternalism.
I think that is true.
It's not really the government's place to
make these decisions, I think,
in terms of parenting children.
And you got into this before.
And it is hard.
Exactly like you said,
there is a balance because there's so much
going on in people's lives.
It's so common for both parents to be
working now.
Some people have to work two jobs.
Society is just crazy at the moment,
right?
And so...
Yes, it's hard,
but I don't think that that should be
an excuse for the government to step in
in this way.
The government should be stepping in and
making people's lives easier so that they
have time to parent their children
themselves, right?
That would be an ideal outcome here.
What if we all made enough money where
we had the time to educate our children
properly?
What if the government tried to do
something about that?
I don't know.
Just a thought.
So, yeah,
it would be nice if somebody could actually afford to stay home and be a parent.
Right.
Now, I especially agree with, and Henry used to say this a lot on Surveillance Report too, exactly what you just said: we keep focusing on social media being bad for kids, social media being bad for kids.
Social media is bad for everyone.
Yeah.
Social media is bad for me.
I notice it even like when I spend
too much time on social media,
I start to get that FOMO and I
start to, you know,
it really starts to consume me.
And I,
I'm sure that some people are more
susceptible to that than others.
Like I know some people whose relationship with Facebook is like my relationship with my phone, where half the time I'm like, where is it? I don't even remember.
But these are companies with people who are paid full-time to figure out how to keep people on the platform longer.
That is their job.
I really want to stress that.
However good you are at your job,
that's how good they are.
It's just not a fair fight is what
I'm getting at.
It's really unfortunate that we keep focusing on "this is bad for kids" and ignoring the fact that everyone is impacted by this.
And I think it would be,
to your point,
if we're going to regulate anything,
we need to regulate the companies and the
algorithms and make it less harmful for
everyone.
And then maybe we wouldn't need to resort
to these extreme measures.
The other thing I would say about this ban is that we are still in the really early days of the internet, if you really think about it. I didn't really think about this much until, I believe, I was watching a Hank Green video where he said something to this effect: in terms of society in general, this many-to-many communication system that we have with the internet is extremely new. Most people have probably only been in the social media mass communication landscape for maybe ten or fifteen years. I know you've probably been on the internet longer, and some of us who have been into technology have been on it a bit longer than that. But for most people, it's only been around fifteen years, and some whole countries even today are still just getting connected to the internet, just getting phones, and it's just becoming a problem there.
This is an extremely new development in society, and I don't think we know yet what works and what doesn't. I don't think we've given enough thought to all of this, because there are so many benefits, to be honest, so many benefits even to social media if it's done properly, that outright banning it just doesn't make a ton of sense to me. But clearly something has to be done, and I hope that other governments outside France and Australia try to think about these more nuanced approaches, about how all of this technology can be improved and become a better tool in people's lives, rather than just seeing the problems that these, especially these American big tech, companies have created on the internet and quickly reacting by banning it outright. I think there's some middle ground to be found here that I would really encourage.
Totally agree.
With that out of the way,
in a little bit,
we're going to talk about TikTok.
But first,
we're going to talk about stories from the
UK.
That is correct.
So keeping with the vein of age-gating the
internet,
the UK House of Lords has voted to
ban VPNs for children as the pressure on
privacy tools increases.
So this is...
I mean,
the headline kind of says it all.
The House of Lords,
I'm not intimately familiar with the UK's
legislative system,
but it's one part of their legislative
branch.
I believe the House of Commons is the other one, if I remember correctly.
Yes, that's correct.
Okay, yeah.
So basically,
the House of Lords has passed this.
Now it's going to go on to the
House of Commons.
And I may be mixing this one up with France, but I want to say that the president or prime minister or whoever has expressed support for this, so if it passes the House of Commons, that is probably not good.
But the good news is it says here
the labor government has a large majority
in the commons,
but it's not clear whether it will attempt
to overturn the amendment or support it.
So, yeah,
this may face scrutiny or it may just
fly right on through.
We don't know at this time.
But it says that the vote passed two oh seven to one fifty nine, and that within twelve months, "regulations which prohibit the provision to United Kingdom children of a relevant VPN service" must be enacted.
And this is specifically in response to
the Online Safety Act,
which has not gone well.
Within, God,
I think within days of the Online Safety
Act taking effect,
there were stories about how a VPN could
get around it.
People were using their parents' IDs.
I think some people were even using
screenshots from video games,
specifically the game Death Stranding.
So yeah, that did not go over well.
There was also, let me see,
if I remember here,
I think there was a second law.
Again,
I may be thinking of the France one.
I read all of these stories yesterday,
so they may have jumbled up in my
mind a little bit.
Um, yeah,
I'm not seeing anything about that.
So yeah, this is unfortunate. This is kind of like we were just saying: I love when governments just pile band-aids on top of each other. We passed the Online Safety Act. Oh, that didn't work. Well, let's ban VPNs. And then it's a cat-and-mouse game; people are going to find a way around the VPN ban too. And like we were just talking about a minute ago, the source of the issue is the harmful content online, right? So why don't you address the content itself?
And in their defense, some of the stuff that's harmful, if it's harmful at all, is out of their reach. If a website is based in another part of the world, the EU, India, America, they can't really do anything about that.
But I don't know.
This just feels to me like, oh,
it didn't work.
We need to add a Band-Aid.
Yeah, it's a cat-and-mouse game, so I don't know where they think this is going to end at its logical conclusion.
And it's not great,
because obviously VPNs are not a total
anonymity tool.
They definitely do get hyped up a little
bit too much,
especially in a lot of sponsor segments.
But they do still have a legitimate use
case, and they are...
I would say they're an easy way to
make some improvements to your privacy.
Like, a lot of the VPN providers we recommend have DNS blocklists that will block known trackers, known ads, and known malware. A VPN will also change your IP address, which is part of the way that companies fingerprint you online.
And again, not perfect.
Definitely leaves a lot to be desired,
but it's a great start.
And especially if your threat model is
like you don't want your ISP selling your
internet history,
you don't want your ISP knowing where you
go online, which is totally fair.
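For anyone curious about the mechanics of the DNS blocklists mentioned above, here is a minimal sketch with made-up domain names and addresses. Real VPN providers do this filtering on their own DNS servers, not in Python; this just illustrates the idea of answering blocked queries with a sinkhole address.

```python
# Minimal sketch of DNS-based content blocking (hypothetical
# domains and addresses, for illustration only). A filtering
# resolver answers queries for blocklisted domains with a
# sinkhole address so the tracker or ad never loads.

BLOCKLIST = {"tracker.example.com", "ads.example.net"}

def resolve(domain: str, upstream: dict) -> str:
    """Return a sinkhole address for blocked domains,
    otherwise the upstream resolver's answer."""
    if domain in BLOCKLIST:
        return "0.0.0.0"  # sinkhole: the connection goes nowhere
    return upstream.get(domain, "NXDOMAIN")  # normal lookup

# A stand-in for a real upstream DNS resolver:
upstream_dns = {"example.org": "93.184.216.34"}

print(resolve("tracker.example.com", upstream_dns))  # 0.0.0.0
print(resolve("example.org", upstream_dns))          # 93.184.216.34
```

Because the block happens at name resolution, one blocklist entry cuts off every request to that domain, which is why this is such a cheap and effective privacy win.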
Yeah, I do think they serve a purpose,
and it's really unfortunate to see them
losing a major benefit,
which is that you don't need to turn
over ID,
because that kind of defeats the whole
privacy thing, in my opinion.
I think that's about all I got on
that one.
Absolutely.
I saw this story earlier this week, and I sent out some posts on social media about it that have been doing pretty well.
But basically I was talking about these
VPN bans in general,
because I think that a lot of people,
and especially techie people in this
space, hear about bans on technology.
They hear about a VPN ban,
or they hear about a ban on end-to-end
encrypted messengers, like Signal,
if something like chat control were to be
rolled out.
And they think, like, oh,
I can still continue to use these tools,
and...
And I'll be fine.
Even if this affects other people,
I'm smart enough to know how to bypass
all of this stuff,
and it won't be an issue for me.
And that's what we've seen with age
verification rolling out.
A lot of people are just using VPNs,
right?
But I think...
The problem with banning and criminalizing
very common,
very mundane and very legitimately useful
technologies like VPNs, for example,
is that it makes crimes very easy to
commit and very commonplace.
And this is the first step in what we see in authoritarian regimes, where they try to fill the books with as many potential crimes or violations as possible, so that even if you're doing something completely unrelated to the crime at hand,
like if you're using a VPN and the
government decides they don't like it,
like if you're protesting your government,
for example, in the UK,
they can very easily like look at your
technology.
They can look at you
being a VPN user or using this end-to-end
encrypted tool or whatever,
if any of these laws pass,
and they can use that fact against you,
not only in the courts as a crime,
but also in the courts of public opinion,
so to speak,
where they can really label you as
something which you probably aren't.
And people will judge you for that.
And that comes from making these
legitimate tools
seem evil and villainizing them and really
just changing their reputation.
It affects people in a lot of ways,
and it affects people in non-technical
ways.
That's the big point that I wanted to make here.
So, yeah, I would be worried about any of these total bans on technology. This is the same argument that we had with Chat Control a while ago, which I'm sure will crop up again.
I just want people to remember that if
you live in these countries,
this is not just a technical issue.
And you need to be keeping an eye
on this stuff and keeping up with it
and speaking out against it because this
will end up affecting everyone.
It's a bit of a slippery slope argument,
but we're definitely at the top of some
slippery slopes right now.
Yeah.
And the other thing I want to add
onto that, that's,
that's all absolutely true.
And you're absolutely right.
A lot of the time we don't think about the non-technical side of this. But also, personally, I really hate the attitude of, "Oh, well, I know how to get around this." That's great. A lot of people don't.
And privacy is a team sport, and privacy is a human right. We have that in the merch store. For those of you who don't know, we have a merch store at shop.privacyguides.org, and we have a shirt that's super awesome that has Article Twelve on it. God, I'm such a nerd. It's from the nineteen forty-eight United Nations Universal Declaration of Human Rights; Article Twelve says that everyone has a right to privacy. I don't have the full text memorized, though. That's going to be my new project: memorizing the actual article. But the point is, this is a human right. We're talking about things like water, food, shelter, the right to live, the right to education, and also the right to privacy.
And so if we're going to believe that,
if we're going to sit here and say,
yes, privacy is a right,
the government is infringing on my rights
by taking away my privacy,
then that's really messed up to say, oh,
well, this doesn't affect me, so meh.
No, everybody should have that right.
Even if you know how to get around
it, lots of people don't.
Yeah.
At this point,
I don't care what form your compassion
takes.
If you're like, well,
then I'm going to teach people how to
get around it.
There may be legal repercussions for that.
I'm not endorsing that.
You do you.
But whether that's I'm going to teach
people how to get around it,
whether that's I'm going to write my
politician, whatever it is,
don't just sit back and go, oh, well,
this doesn't affect me, so I don't care.
Because what's that classic poem about the Holocaust? First they came for everyone else, and by the time they came for me, there was no one left to speak for me.
And
I wouldn't be surprised if that happens in
some places because we keep saying it
doesn't affect me until it does,
and it's incredibly selfish,
and we need to get out of that.
Sorry.
While we're on the topic of these bans
of technology for children,
I saw this comment in our YouTube chat
where they said,
as far as I can tell,
eIDAS will be used for age verification
within the EU.
Basically,
these digital ID systems will allow
websites to request some sort of ID on
your phone or computer and only get
certain information about it in a
supposedly privacy-respecting way.
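To make that "only get certain information" idea concrete, here's a toy sketch of selective disclosure. Everything here is an assumption for illustration: the issuer, the claim format, and the use of an HMAC as a stand-in for the issuer's signature. Real wallet systems like the eIDAS one use public-key signatures and zero-knowledge or salted-hash schemes, so the verifier never holds any issuer secret.

```python
# Toy sketch of selective disclosure for age verification: the
# wallet presents one signed attribute ("over 18") instead of the
# underlying birthdate or full ID. HMAC stands in for the issuer's
# signature; real systems use asymmetric cryptography.
import hashlib
import hmac

ISSUER_KEY = b"issuer-secret"  # hypothetical issuer signing key

def issue_attestation(over_18: bool) -> tuple[str, str]:
    """Issuer signs a single claim, not the whole identity."""
    claim = f"over_18={over_18}"
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim, sig

def verify(claim: str, sig: str) -> bool:
    """A site checks the signature; it never sees a birthdate."""
    expected = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

claim, sig = issue_attestation(True)
print(claim, verify(claim, sig))  # the site learns only over_18=True
```

The privacy property being sold is that the website learns a single yes/no fact; the censorship concern discussed next is that the issuer still controls whether you get that attestation at all.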
And I think we've talked a bit about
digital IDs in the past,
but I just want to reiterate.
This is certainly a better solution than
the current setup that a lot of websites
are doing where you have to scan your
face and you have to scan pictures of
your ID because that is
a privacy nightmare.
It's also a security nightmare.
We've already seen, I think, multiple data breaches of these age verification and ID databases. And now all of this personal information is out there. That is a huge security problem.
It's an economic problem, because there will be identity theft. The government is enabling extremely scary stuff by promoting these technologies.
And in the US here,
I know that the government uses vendors
like real.me or all these other identity
verification companies.
ID.me?
Yes, thank you.
Something like that.
I'm confusing it with Real ID, which is separate.
But yeah,
they use these for official government
things instead of making their own ID and
login system.
And that is extremely concerning from a security perspective.
But,
The overall point that I want to make
with this is that it's not only about
the privacy of the individual transaction
being made here.
This is also a censorship issue because to
get this ID in the first place,
you need to give away a lot of
your information.
So that's a privacy issue right there.
Maybe in the EU,
a lot of people already have national ID
cards.
You might be used to it.
Here in the US,
that isn't necessarily commonplace.
I know the current administration is really pushing for it to be, and they're really supporting everyone having digital IDs on their phones, which is a whole separate thing.
But the issue being created is that, as these governments try to age-gate as many services and sites as they can possibly justify, it really creates a wall around all of these things, and the government has absolute control over whether you can cross that wall and access a given site. They can do something like revoke your ID if they want to, for whatever reason, and kick you off of practically half the internet. And we're seeing these ID verification laws and directives expand far beyond their original intent of gating adult services. Now we're talking about social media sites. Now we're talking about VPNs. We've seen them affect, in the UK, potentially even Wikipedia, which is just a knowledge-sharing service.
That's something people should have a right to access, frankly, and it's crazy that the government would step in and get in the middle of that. That is what we're enabling with these digital ID concepts: it is a whole system that the government has sole control over, and it is really, in my opinion, antithetical to the internet and what computers and the internet were made for. We just cannot accept these restrictions on the free flow of information and knowledge. And so even with these private, zero-knowledge digital ID solutions, it creates a real danger to society. I don't think we should tolerate using this technology at all for gating access to information, especially.
Yeah, totally agree.
Great point.
Information wants to be free.
And there are many who argue that it should be. I mean, I think that's why, and I don't know if this is the ethos for Privacy Guides, but at The New Oil, I've never charged for articles or blog posts. I'll do early access, but then a week later it goes public. There's no part of my website that says, oh, you've got to join a membership to access this premium stuff. No, because it's information, and it should be free.
And yeah, putting up those barriers to information is really scary for where it could lead.
Absolutely.
Going back to something you said a little
while ago now about the government just
constantly putting band-aids on their
current bad solutions,
we have this story here from The Independent.
AI and facial recognition to be rolled out
as Britain's broken policing system faces
sweeping reforms.
Officials say using AI will free up six
million hours of police time,
the equivalent of three thousand officers
each year.
This article says the Home Secretary has
announced plans to ramp up the use of
AI in live facial recognition as she
unveils sweeping reforms to fix Britain's
broken policing system.
Shabana Mahmood, sorry if I pronounced that wrong,
is investing a hundred forty million
pounds to roll out technology which she
hopes will free up six million police
hours each year,
the equivalent of three thousand officers,
as part of the biggest overhaul of a
quote,
outdated policing model designed for
another century.
AI technology will be deployed to rapidly
analyze CCTV, doorbell,
and mobile phone footage,
detect deepfakes,
carry out digital forensics,
and speed up administration such as form
filling, redaction, and transcription.
These measures are part of a bigger
overhaul to policing that it seems like
England is seeing right now.
But I think I saw somewhere in this article, now I can't find it... Well, I think the overall point is that these AI tools are already well known to be quite unreliable, right? When we've seen this rolled out in other law enforcement jurisdictions, and especially here in the US recently, for example, we talked a lot about this last week: these AI tools being rolled out, they're not reliable. They're making mistakes, and people are
taking the claims of these systems at face
value.
And I think it's a really dangerous
situation that the UK is putting
themselves in by enabling this technology.
So definitely something to be wary about,
I think.
When you were reading this article, were there any other points you wanted to highlight?
I think just to back up what you're
saying, yeah,
I have a friend here in the US
who works in law enforcement.
He's not a cop,
but he's like a civilian employee.
And he sends me stories,
I swear to God,
a couple times a month where he's like,
oh, so one of our cops used AI.
And I'm pretty sure this is the official
sanctioned system they're allowed to use.
Like,
I don't even think this is somebody being
like, quote unquote,
lazy and going outside the system.
He's like, yeah, so this cop tried to use AI to do his police report, and it just got everything completely wrong. Like, it said it was two in the morning when it wasn't, and all these little things that don't sound that bad to us, but it's like, yeah, that means this case gets thrown out in court, because the defense will absolutely tear this apart.
And just, yeah,
they're completely unreliable.
And he sends me these stories all the
time.
And I'm assuming these are just the really
bad ones he sends me that are like,
wow, they got this really wrong.
But yeah, AI is so, so bad. Yeah, I found the line in this article I was looking for; it was just one sentence. They're creating this national center dedicated to using the new technology, called Police AI, despite an AI hallucination just recently influencing a decision by one of their police departments to ban fans of an Israeli football team from a match in Birmingham last year. So they're already experienced with the problems this can cause, the problems you see when you just take these tools at face value.
People aren't giving this AI oversight and
it causes real problems.
And I cannot imagine that they've really learned from these mistakes. As we've seen, this sort of thing is only going to become more frequent and more of a problem. I think that is the biggest problem with this that I would point out, for sure.
It's crazy.
And it's such high stakes, too. It's one thing when, because I've admitted to this before, I'll use Brave's Leo. I'll go to the search engine first and type in the keywords, whatever I think should pop up the thing I'm looking for, but then I'll get... oh God, actually, what was it? I think it was with WhatsApp.
Yeah, it was this whole WhatsApp thing, our headline story this week, actually. As I'm working on this script for private messaging,
I was looking for a story about how
WhatsApp tried to change the terms of
service so that they could share data with
other meta properties like Instagram for
targeted advertising.
And everybody got really mad.
And so I went to Brave and I
typed in like, you know,
WhatsApp data sharing, whatever, whatever.
And all I got was this week's headline
story.
And I'm just like, oh my God, okay,
forget this.
And so I went to Leo and I
was like, hey, I'm looking for this story,
blah, blah, blah.
And it was like, oh,
you're thinking of this from twenty twenty
one or whatever.
And so, yeah,
but I've had these times where, like,
I ask Leo a question and it works.
And then ten minutes later,
I have the same problem.
So I ask it another question.
But for some reason,
it loops back into the original question
and just literally word for word answers
the first question.
And I'm like, no, that's.
All right, let me close this window and start a new one.
And the point being, it's amazing that they see that kind of behavior and they're like, yeah, this will be great for determining whether people go to jail, have a criminal record, possibly end up on death row. I don't think they have death row in the UK, but you know. It's like, we can completely ruin somebody's life, and we know that this thing is not perfect, but we're willing to do that anyway.
That's wow.
That's insane.
Yeah.
No, the other thing that jumped out at me, in answer to your question, was the facial recognition vans. These are... yikes.
These have been covered extensively by
groups like...
I think they're called Big Brother Watch
in the UK.
And the police will just randomly... they have these mobile, I don't even know what you want to call them, mobile facial recognition vans. They'll go out to a public street out of the blue and set them up and just scan everybody who walks by.
And the reason they're so problematic is because they'll put signs up at the end of the street that say, hey, we're using facial recognition, because legally they have to; they have to put those signs up so that you can, quote unquote, consent. And the reason I say quote unquote is because there have been so many stories of people turning down the street, seeing that sign, and deciding, oh, I don't want to go down this street, so they'll turn and walk away. And the police will follow that person, hunt them down, and be like, why'd you walk away?
What do you have to hide?
What's your name?
Show me your ID.
And sometimes they'll even run facial recognition on them anyway.
And it's like, dude,
it's not consent if you're going to chase
me down the street and make me do
it anyways.
And so, yeah,
I think they're going from like ten of
those to like fifty of them.
Five zero.
It's completely insane.
And those things scare the crap out of
me.
Yeah, my...
My wife made a friend in the UK
last year and she was like,
we should go sometime.
And I was like, never.
No, I'm not going to the UK.
It is a bit of a scary place.
I just saw we got a comment from a viewer here who said these cases are hilarious, that the people using these tools are too lazy to look back at what the AI really generated for them.
And I think that's very true.
And it's ridiculous,
but I think it really highlights a huge
problem that we see with AI right now,
which is that I don't think people...
When we see AI used in these circumstances, it always needs to be done under the oversight of a real person with experience and knowledge in the space, because AI will lie to you straight to your face without batting an eye; it doesn't know any better. And if you're going to use AI at all, the only way to do it is to be aware of that and be able to catch it.
Maybe, I don't know, in terms of police, maybe some police officers can do that right now. More experienced ones might be able to look at this and say, oh, that's not quite right. But what we're missing right now, I think, is all these younger people, the new generations entering the workforce or in college right now, who are really reliant on AI. They're going to be using AI more in their jobs, and they aren't being trained on how to properly oversee it. I think we're losing a lot of that knowledge, and we're putting a lot of trust in AI, and that is simply not a tenable solution to this problem. We're not properly training anybody who's using these tools right now, in my opinion, to be aware of this in a way that makes sense. We're offloading a lot of jobs to AI right now when that is not something AI can do, and it's never really going to be something AI can do. There are certainly AI optimists who argue, and they may be right, that AI will be a big part of people's jobs in the future, but it'll always be under this human supervision.
And it'll always be like a force multiplier, basically, but you have to have the ability to recognize the problems with AI and control it. And people simply don't right now. I think that's a huge problem.
I think it's only going to become more
and more of a problem as more people
with real world experience retire and they
don't pass that knowledge down to new
trainees who are just doing everything
through AI.
So that worries me quite a bit about
AI, not just in policing,
but in pretty much any field where they're
trying to apply it right now.
And I think we really need to be
aware of that and we need to do
more about that to make it better.
Yeah, and that kind of goes back to what I said earlier about how we have a low level of tech literacy.
We have people who...
You know, I just mentioned the issues that I have with Brave's Leo,
which somebody else said in the comments,
like Leo's pretty good.
And I agree.
I'm very happy with the results.
It cites its sources.
So I always double check it.
And I'm like, okay,
let me make sure this actually says what
you're telling me it says.
But even then it's still, you know,
it gets things wrong.
It repeats itself.
It does things.
And I don't understand, like with the whole AI girlfriend thing, some people really think, and I'm sure they realize it's software, but they're like, this software is sentient and really cares about me. And I have to imagine it has the same glitches that Leo does, and I don't understand how you can look at that and still think this is the way to go.
And just that level of tech literacy, not understanding what's going on under the hood, that it's just responding to a prompt, that it's just, you know, autocorrect. It scares me that, like you said, this is something people are relying on for such important decisions, and on top of it, we're losing the ability to understand what it is, what it does, its limitations, things like that. It's a tech literacy problem, but it's also, I think,
like an intentional deception issue.
And this almost ties back to what we
were talking about social media earlier,
where the way that AI companies,
in particular OpenAI, I think,
are treating their customers and are
designing these models is
becoming a legitimate public health hazard
more than anything.
And that's the sort of thing where, again,
we probably want to see more safeguards
and more regulation and more thought put
into how people interact with these
things.
Because there are so many people out there, I think, who naturally just want to humanize and anthropomorphize any technology they use. They're going to be sucked into these relationships you were talking about, for example, or into the advice it gives, because it can sound so human. And playing on that fact to sell more subscriptions or to get more users is, I think, really, really dangerous.
It is pretty much all of the problems that we've seen with algorithms and social media, but ramped up to eleven. It's bad stuff that I think really needs to be thought through more carefully.
We're in a classic Silicon Valley move-fast-and-break-things moment, but the things we're breaking right now are extremely serious, and that's maybe not the mentality we can take when we're rolling out this sort of technology nationwide or globally.
It's crazy stuff.
Yeah, for sure.
All right.
I think in a little bit here,
we're going to talk about some of the
popular discussions on the forum and start
answering questions.
But first,
we're going to talk about TikTok on the
topic of public health crises and AI.
Yeah, so TikTok,
in case you guys haven't been paying
attention, which for the record,
I wouldn't blame you.
I don't really pay much attention to it
myself, if we're being honest.
But TikTok was sold.
Believe it or not,
the deal finally went through.
I know Trump's been trying to get that pushed through for a couple of years now. And finally, I think on the twenty-second last week, the deal officially closed and the handoff started. And I know the handoff has started because I overheard my wife and one of her friends saying that TikTok was basically broken for like a week.
Well,
I think we've had a problem ourselves with
our own shorts, right?
We can't post.
We haven't been able to post.
Oh, that's right.
I forgot about that.
Yeah.
In case you guys are on TikTok,
our shorts stopped posting there because
of the technical issues they were having,
and it couldn't post for some reason.
And I actually went to go try and
look for the live and see if there
were any comments,
and I can't even see the live,
but that could be my phone.
So, yeah.
Who knows?
But...
Yeah, so anyways, TikTok sold.
And of course, when you open the app,
you have to accept the terms of service,
which I'm assuming you have to do even
if you want to delete it now,
which is a dark pattern that's not cool.
But anyways,
where we're going with this is there were
some privacy changes to TikTok.
Believe it or not, it got worse.
If you are one of the people who
didn't think it could, it did.
So this article from Wired here talks about three of the biggest changes. One of them is that TikTok is now capable of precise location tracking. Before this, it did not collect precise location data.
So now, if you are a TikTok user for whatever reason, like, you know, we post stuff there, make sure you double-check and disable that precise location. I mean, disable location in general, but especially precise location.
It now tracks your AI interactions,
which TikTok is loaded with AI slop,
but I guess there's also like AI tools
and I don't understand what those are
because I don't use them.
Again, like I show up,
I post a video, I check for comments,
I leave, I don't hang out there.
So I guess there are AI tools now
that formerly did not fall under the
privacy policy,
but now TikTok has started tracking
analytics and metadata from the usage of
those tools.
And if you're watching the video version, you can see here how these tools are not explicitly mentioned in the old privacy policy but are in the new one. That is one cool thing about this article: it shows you what the old privacy policy says and what the new one says.
And then next up, not quite last, because there are a couple more things we're going to talk about: TikTok has expanded its ad network.
So previously, I want to say, let me double-check here.
So rather than using, well,
you use the app TikTok.
Yeah.
So now basically they're going to be able
to advertise to you in other places and
use the data from TikTok to advertise to
you in other places.
And I would assume collect that data from
other places to advertise to you on TikTok
because I know that TikTok does have its
own analytics tool, like the Meta Pixel or Google Analytics.
So yeah, that advertising has expanded.
Another privacy concern we should mention
that I have seen making the rounds.
Let me go ahead and change my tab
I'm sharing here.
This one comes from TechCrunch and it
says,
TikTok users freak out over app's immigration status collection.
Here's what it means.
I don't like this article.
I'm gonna say that upfront.
Because basically, and again, I don't hang out there, so I wouldn't know.
But I guess there's a lot of videos
going around TikTok about how TikTok now
is collecting your immigration status,
which is probably already being reported
to ICE.
I feel like I've reported on a story
about that before, but I could be wrong.
But anyways, according to this article,
TikTok has always done that.
The difference is that now with the
updated privacy policy,
because of the way that laws in California
are worded,
specifically with the CCPA,
the California Consumer Privacy Act,
now they have to specifically disclose it.
And it's – let me see if I
can find it here.
It's a very subtle, like, basically –
Yeah,
the policy specificity around types of
sensitive information has to do with state
privacy laws such as California's CPRA.
The CCPA, for instance,
requires businesses to inform consumers
when they collect sensitive information,
which the law defines as including the
following things.
And there's a...
Huge list of things here,
precise location, genetic data,
things that I think we would all agree
are sensitive information.
And it says, of note,
citizenship and immigration status were
specifically added to the category in
twenty twenty three.
Um,
so basically it was probably always
collecting this data.
It just didn't have to tell you that
before.
And the reason I don't like this article
is just the author's tone.
She takes this very like, guys, calm down.
They were always doing this.
It's not a big deal.
Now they're just being more honest about
it.
And I really don't like that tone because
it's like, no, it was bad then too.
It's still bad.
It was bad.
This is not a calm down moment just
because we know about it now.
So yeah, that, uh,
What's up?
Sorry,
my camera is apparently not working.
Maybe I have to fix this.
It's all good.
Well,
the only thing I was going to say
is I read this article and I was
thinking the exact same thing.
Like this TechCrunch article,
they really framed this as like, hey,
you know, it's actually not a big deal.
They have to put this in their privacy
policy because they're collecting it and
it's the law.
But that's not an excuse for them to
collect it in the first place, obviously.
I see a question in here:
how did they determine immigration status?
I think when it comes to this and
also the other sensitive information that
was mentioned in this article,
like sexual life or sexual orientation,
I think that that stuff is kind of
being determined by algorithms,
most likely.
And it's probably a situation that we
see...
Similar to that stuff showing up in the
privacy policy of cars and vehicles,
for example, when we saw Mozilla's research on that.
A bit of it, I think,
is going to be overzealousness.
I think a lot of lawyers think we
should put everything in there just to...
cover our butts just in case something
happens.
But also I think they are collecting this
information and they're inferring it based
on not only the content you post,
but also the content that you consume.
And I think that they can probably get
a pretty good idea of all of this
information just based on the content you
consume alone.
And so
Yeah, ideally,
they wouldn't be collecting any of that
information at all.
I definitely don't think that just because
it's in state privacy laws,
that's an excuse to put it in there.
Ideally,
the algorithm wouldn't be able to know
that information.
And once again,
I think that's the theme of this episode.
That's the sort of thing where the social
media algorithms are overreaching and are
very dangerous and need to be reined in
a bit.
Yeah, for sure.
Yeah,
it would be nice if they just said,
here's how we determine that information.
But it's probably so many different ways.
Because like you said,
some people disclose it.
Some people upload a video where they're
like, hey,
I'm an immigrant here and I moved here
in
you know, but other people, yeah,
it's probably a lot of signals.
Like, I would have to imagine if I
moved out of the US, it would
probably still be pretty easy to tell
based on the way I spell things,
the language.
I mean, there have been studies into like,
you know,
one of the most common examples is like
soda versus pop, right.
Depending on which phrase you use or Coke
or some specific things,
like depending on which word you use,
it's a pretty good indicator.
Like, Oh,
you're probably from up North or you're
probably from down South or something.
So
That's just when you add up enough of
those little signals,
you can start to reveal things that may
not be a hundred percent accurate,
but they're probably right more often than
they're wrong.
Exactly.
Like every single one of those pieces of
data, it's like a Venn diagram for like,
you just keep adding more circles.
And at the end of the day,
there's only going to be one person in
the middle of all of those circles, right?
You can get very specific with very broad
data.
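That Venn-diagram idea can be sketched in a few lines of code. This is a toy illustration with entirely made-up names and attributes: each broad signal (dialect, timezone, hobby) alone matches several people, but intersecting the candidate sets leaves one person in the middle.

```python
# Hypothetical population: every attribute value is shared by multiple people,
# so no single signal identifies anyone on its own.
people = {
    "alice": {"dialect": "pop", "timezone": "UTC-6", "hobby": "hockey"},
    "bob":   {"dialect": "soda", "timezone": "UTC-5", "hobby": "hockey"},
    "carol": {"dialect": "pop", "timezone": "UTC-6", "hobby": "gardening"},
    "dave":  {"dialect": "pop", "timezone": "UTC-5", "hobby": "hockey"},
}

def matches(key, value):
    """Return the set of people consistent with one observed signal."""
    return {name for name, attrs in people.items() if attrs[key] == value}

# Each signal is one circle of the Venn diagram; intersecting them
# narrows the pool until only one candidate remains.
candidates = (
    matches("dialect", "pop")
    & matches("timezone", "UTC-6")
    & matches("hobby", "hockey")
)
print(candidates)  # {'alice'}: the only person inside all three circles
```

Each circle here still contains two or three people, which is the point: none of the inputs is sensitive or precise on its own, yet the intersection is unique.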
Dude,
that is a really good way to put
it.
I like that.
That was good.
The next story in here I think answers
Jordan's question in the chat.
Does this affect the U.S.
only or the whole world?
Do you have that story pulled up on
your screen here?
Let me see.
I do, yeah.
So this is just kind of rounding off
our trio of TikTok stories.
So because TikTok – and I'll be honest.
I don't even know the full answer to
this story myself.
I am very unclear.
Did all of TikTok just get sold to a
bunch of US investment companies and one from the UAE,
or did only part of it?
It is only the American one.
Only American TikTok is sold to this.
The worldwide TikTok continues to be owned
by ByteDance.
But how this affects TikTok, I think,
is still a good question and it's unclear.
And I think that that is the point
of this story here.
It's like Canada is now looking into this.
I think especially because I would imagine
just proximity to the US could get like
some Canadian users lumped into this
American version of the platform because
of, I don't know,
geolocation settings of their phone or
whatever.
I don't know how exactly this split works.
The whole TikTok and especially like
America,
the American TikTok being its own thing
doesn't make a lot of sense to me
because it's unclear whether it federates
with like the global TikTok.
Do you see the same content?
Is it just a different algorithm?
Can people outside the US see American
TikToks?
I unfortunately don't know enough about
TikTok.
But anyways, going back to this story,
and you can share more about it.
I think that that is the question that
Canada is asking right now.
I think it's unclear to everyone.
Yeah,
you asked a whole bunch of questions that
are scary.
Yeah,
and that's the headline for audio
listeners.
It says,
Canada's privacy czar seeks answers on
TikTok policy updates.
I don't know when we started calling
everybody a czar.
I don't know when that took off,
and I don't like it,
to be totally honest with you.
But yeah, it's Canada's, oh my God,
what are they?
The Office of Privacy,
the Office of the Privacy Commissioner of
Canada, the OPC.
And yeah, like you said,
I know this is way more common in
Europe, but even here in the U.S.,
I don't know about nowadays,
but historically, we've had areas,
especially in the south,
where people will be right on the border
of Mexico and some state,
and people will come back and forth.
Maybe they live in Mexico,
but they work in the US or vice
versa.
I don't know how that works,
but I do know it's a thing,
and I'm sure it was probably a thing
in Canada.
I've known a few Canadians who,
growing up –
Went to school or maybe not went to
school,
but like went to church in Seattle and
maybe not Seattle.
That was probably pretty far down for
them.
But, you know,
like they were back and forth pretty
regularly and they were almost like dual
citizens because they're just so close to
the border that they have a lot of
friends and connections in the other
country.
And so, yeah,
Canada is rightfully so trying to
understand with all this.
this sale going through now,
what does that mean for Canadians?
Will they get looped into this stuff?
Will their privacy rights still be
respected if they get looped in?
Is TikTok going to make any effort to
separate Canadian users and American
users?
Yeah,
so we don't really have much on this
story because this is just kind of the
initial announcement that, hey,
we're asking these questions.
But I think they are very good questions.
And like Jonah said,
there's so many questions right now.
We're trying to figure out how any of
this is going to work.
What are people going to see?
I know Trump said that he wanted the
algorithm to be retrained once America
bought it.
So yeah,
there's a lot of things that are kind
of up in the air right now.
I also, real quick,
I appreciate the people in the comments
when I asked about sodas.
And somebody said, we call it soft drink.
And someone else said,
we call it by its chemical compounds.
So thank you, guys.
But I believe Jonah is trying to fix
his camera right now.
And in a minute,
we will start taking some viewer
questions.
And Jonah will return to us very shortly.
Yes.
Oh, he's back.
He's back.
All right.
Did you have anything you wanted to add
to the TikTok story or are you ready
to move on to forum updates?
No,
I think I could point out this comment.
Jordan just mentioned this really quick.
I don't think we talked about it too
much,
but I definitely have seen a lot of
stories about how the algorithm is
changing.
There's definitely been allegations of the
American version of TikTok now censoring
posts that are critical of the American
government and that sort of thing.
Yeah, very concerning for Americans.
I don't think that it was the right
move to sell TikTok to Larry Ellison,
of all people,
and to Saudi Arabian private equity
companies and all that stuff.
To be fair, they are Emirati,
not Saudi Arabian.
Oh, sorry.
I did a little bit of digging.
It turns out we're actually really good
allies with the United Arab Emirates.
So is China. So, you know, it's like,
I don't like you, but I like your best friend.
Which, I don't know if that makes you really mature, or, I don't know.
I'm just asking questions. Yeah.
So yeah, as you were saying,
we're going to get into questions that we
see on our forum.
We'll also get into questions that we've
seen in the chat.
I know there's some questions.
We've answered some questions as they've
come up,
but I've seen some questions that we've
moved on and we'll get back to those.
So stay tuned for that.
But yeah, in the meantime,
let's talk about a couple top posts that
we've seen in our community and on our
forum.
I think the...
Big one for this week is, of course,
it's Data Privacy Week,
which is always an exciting time for all
of us in privacy.
Oh yeah, you have it pulled up here.
We had a Data Privacy Day post,
but basically on Wednesday the
twenty-eighth,
it was International Data Privacy Day.
Kind of just a yearly event that a lot of organizations
in the privacy space,
both on the business and consumer side,
really try to focus on personal privacy
improvements and switching to private
alternatives and all of that sort of
stuff.
And so we've been posting some things
throughout the week on our social media
channels about
Data Privacy Day and Data Privacy Week,
ways that people can get into switching to
more private alternatives.
And we talked a bit about on our
forum, I think,
as Nate looks through that,
about how people are preparing for twenty twenty-six.
Yeah,
what people are doing to be more private,
which is super cool.
So I don't know if there's any specific
posts you wanted to highlight,
but
Well, I do love that hail privacy one.
That cracks me up.
But yeah, I mean, no, there's, I mean,
it runs the gamut here, right?
Like,
let me scroll back to the top here.
You know,
one person said one goal for twenty twenty
six is to fully move over to ProtonMail,
which, you know, whether it's Proton,
Tuta, something else.
What is it, Mailbox.org?
Is that the other one we recommend?
Um, whether it's one of those services,
whichever one it is, you know,
it's no small feat to move email.
And fortunately that is something you can
do yourself.
You know, it's not like Signal,
which thankfully is getting really common,
but still, you have to have other people to talk to.
Right.
We talked about that earlier with the
WhatsApp story.
Email, you can move that by yourself.
Nobody's stopping you,
but it is still a lot of work.
And actually, uh,
many years ago I moved from Yahoo to
Gmail, and I spent years still finding
accounts that I forgot about that
went to Yahoo instead of Gmail.
So, you know, it's, um,
it's a lot of work and, uh,
Yeah.
One person gave the advice that the rabbit hole is very deep
and it's an understandable temptation to give up,
but don't: start with the low-hanging fruit
and work your way up the privacy tree
one step at a time.
So, um, yeah,
they talked about smart TVs and they're
trying to replace it with something that's
a little bit more privacy friendly.
Um,
Yeah, they talked about, let's see,
just kind of harm reduction.
I know that's a big thing for me
is they said that their partner has
certain disabilities.
So unfortunately,
they can't really get away from like a
normal phone and stuff.
But they said they're researching a way to
run some old Linux computers and get the
same television programming with more
privacy.
And so, yeah, pretty cool stuff.
We can get into some viewer questions.
I think it's about that time.
First one I saw in the chat,
this was for you, Nate.
What ThinkPad are you using right now and
why?
I am using a ThinkPad X-TX because it
was a gift.
And it was free, and I run Qubes on it.
So I'm actually reading the chat from a Qubes computer.
I have a little VM just for my work in Privacy Guides.
And yeah, I mean, I like it.
It's a little bit slow.
I think the processor is,
I think this computer is from like, so,
you know,
this processor struggles a little bit,
but yeah, but you know, it's not bad.
And it's definitely like, I couldn't,
I can never do any kind of video
editing or gaming.
And also, the screen's a little bit small.
But it's great when I travel.
I went to Europe late last year.
And last week,
you guys saw me with the,
what did I call it earlier?
I called my other computer something.
It's like a billboard or something.
I don't know.
But yeah,
my other computer is massive and covers my
whole face.
And this guy's like fourteen inches.
So it sat right in front of me
on the plane.
Nice and neat.
And that was really, really handy.
And it's a little slow,
but it runs everything just fine.
And it's great for surfing the web and
writing.
And so, yeah, I mean,
I'm going to use it until the day
it stops booting or something.
So, yeah.
Let's see.
I'm looking through here.
We didn't seem to get any chats on
the forum, which is unfortunate.
There's a couple more chats in here.
And yeah,
if anyone's watching and has questions,
this is a good time to leave them
in the chat.
Got another one here from Dread Pirate
Roberts.
Do you guys think that all of these
bad laws like chat control,
ID verification,
facial recognition are done in bad faith
to gain more control over the population
or just bumbling politicians making these
mistakes?
That one I think I did answer a
bit, I guess,
when I pulled up our recent tweet about
that.
I do think that the direction that a
lot of Western countries are going in
right now is towards more authoritarian
practices and towards more control over
their own citizens,
which I think is really unfortunate.
I think that that is a driving factor
behind a lot of them.
So, yeah, I don't think it's great.
Did you have any additional thoughts on
that, Nate?
I think in answer to the actual question,
I think it's both because, you know,
something that one of the podcasts I
listen to,
something the host says a lot is
everybody's the hero of their own story,
right?
Like, nobody...
nobody thinks they're the bad guy.
Even,
even when they are doing objectively evil
things in their mind, it's like, well,
this is a means to an end, right?
Like this is going to make the world
a better place in the long run.
And I have to literally kill people to
do it, but you know, that's their logic.
And I think there are a lot of
politicians who want to protect children
and just don't understand that this is not
the best way to do it.
You know,
whether that's technical misunderstanding
or whatever.
And don't get me wrong.
There's definitely a lot of politicians
that are also just like, hey, man,
whatever lines my pocket,
whatever makes me more powerful,
more prestigious, whatever.
I don't want to let those guys off
the hook.
But yeah, I mean,
even the people who are genuinely doing
this because they're like, oh,
this will make me more powerful.
I think in their head, they're like,
this will make me more powerful and I
can make the world a better place by
my definition,
which unfortunately means a lot of other
people tend to suffer along the way.
So, yeah.
Yeah.
One I did see here that I wanted
to kind of touch on a little bit.
I think this was right before the one
you shared.
Computer's going slow here.
Um, this Captain Haddock said, uh,
surely mass-level awareness and education on privacy is necessary.
The majority of the public simply don't have the
capacity to understand how privacy works
in a technical sense.
I disagree.
Um, I mean, I don't,
I don't want to get too pedantic here.
I mean, people are smart, right?
Well,
to borrow the line from Men in Black,
a person is smart.
I will agree with that.
But, um, you know, I don't,
I don't think anybody is incapable of
learning this stuff,
but I do agree that I,
I think most people don't want to learn
this stuff and it's very, um,
Some of this stuff is really hard to
wrap your head around,
even for those of us who are interested
in it and really passionate about it.
So yeah, I mean,
I just wanted to point that out.
I think when we discredit people,
that's not helpful personally.
But yeah, I mean, people can learn.
It's hard stuff to learn.
What else?
We talked about regulation a little bit.
We did talk about the slippery slope.
I know there's one user here who mentioned,
uh, you know, "I see this as a
stepping stone towards banning and
restricting more of the internet." I agree.
That kind of goes back to what I
said about how nobody thinks they're the bad guy.
But sorry, I see you were
trying to pull one up there.
I was trying to pull up this question
about VPN bans.
I realized they're probably not asking us,
but other people in the chat,
because that is our own tweet that blew
up about VPN bans.
But yeah,
it is interesting which of our posts
become popular and which ones...
not so much. Seems kind of random to
me at times, unfortunately. But you know,
that's, uh, that's the problem with social
media and these algorithms: they're
unpredictable, and a lot of the time
I don't think they get our message
out in front of people who are
interested in reading it.
But sometimes it works out. So, social media.
Yeah.
Here's one from culpable six, seven, five,
zero.
And they said,
do you think privacy has been getting
harder and harder to achieve over the past
couple of years,
as well as getting more inconvenient?
For example,
I keep trying to use Mullvad Browser,
but there's no dark mode on most websites
and it hurts my eyes,
which is minor for me,
but it impacts people.
I gotta be honest.
I think it's both.
I think on the one hand we have
a proliferation of
of user-friendly tools like Signal,
like the Brave browser, like ProtonMail.
And I realize that in a lot of
cases,
these tools still have shortcomings.
Like I think...
I'm a Tuta user,
but I will objectively admit that I think
Proton is the better user experience.
I hate saying that.
So where I'm going with that is I
think we can all admit that a lot
of these tools may still leave some things
to be desired.
Oh, right, where I was going with that.
And even Proton is still missing things
compared to Gmail or Google or Apple or
Linux users.
Anyways, but on the other hand, you know,
there's also like you mentioned you want
to use Mullvad Browser,
which I think is perfectly legit.
Mullvad Browser is great.
I have Mullvad.
Mullvad is fantastic.
And so on the one hand,
it could be like, well, use Brave.
Brave has dark mode,
but maybe Mullvad has things you value,
like maybe you agree with their approach,
their privacy method of trying to make
everybody look the same, like the whole
Tor Browser thing does.
Maybe you just don't like Brave as a
company,
which is a totally valid take as well,
in my opinion.
It sucks that we don't have more really
good options.
When you're in mainstream technology,
you have so many options that you can
almost pick any of them and they'll work.
And it sucks that we don't have that
same freedom of choice with privacy stuff,
um, that we would with the mainstream stuff.
But also the other thing is it's,
I think when we're going up against the
more high level threats,
I definitely worry that our privacy is not
as easily achieved there.
Like,
I think it's really easy to opt out
of the, um, the, uh, targeted advertising,
mass surveillance, automated stuff.
It's when you get up into the more,
you know, like, um,
Oh, what was that company called?
Augury, I think.
This was like five years ago.
There was a company that was basically
using AI to correlate traffic.
This is probably actually why Mullvad
rolled out DAITA.
The company's whole selling point was
they would sell to the DOD and the
military and law enforcement,
like federal law enforcement.
And they're like, yeah,
we can even unmask people that are using
VPNs.
We can correlate the traffic and figure
out where everybody's going.
And pretty much your only defense was like
a multi-hop VPN or Tor.
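Roughly, that correlation attack works like this: an observer who can see packet timestamps on both sides of a VPN doesn't need to break any encryption, just to match timing patterns. Here's a toy sketch with entirely made-up numbers; real systems use far richer features, which is why padding (cover traffic) and multi-hop routing are the countermeasures.

```python
# Toy timing-correlation sketch: match a flow entering the VPN to the
# exit-side flow whose timestamps best fit a fixed latency shift.

def correlate(entry_times, exit_flows, latency=0.05, tolerance=0.02):
    """Return the name of the exit flow whose timing lines up, or None."""
    def score(exit_times):
        if len(exit_times) != len(entry_times):
            return float("inf")
        # Mean absolute deviation from the expected constant latency shift.
        return sum(
            abs((b - a) - latency) for a, b in zip(entry_times, exit_times)
        ) / len(entry_times)
    best = min(exit_flows, key=lambda name: score(exit_flows[name]))
    return best if score(exit_flows[best]) <= tolerance else None

# Timestamps (seconds) of one user's packets entering the VPN...
entry = [0.00, 0.31, 0.90, 1.42]
# ...and of three candidate flows leaving the VPN toward different sites.
exits = {
    "site-a": [0.05, 0.36, 0.95, 1.47],  # same rhythm, shifted by latency
    "site-b": [0.10, 0.80, 1.10, 2.00],
    "site-c": [0.04, 0.50, 1.30, 1.90],
}
print(correlate(entry, exits))  # site-a: the timing fingerprint matches
```

The encrypted tunnel hides the contents but not the rhythm; injecting dummy packets so every flow looks the same, or splitting the path across independent hops, is what breaks this kind of matching.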
And I think when you're talking about
that level of cutting edge,
I think it's getting more competitive and
more difficult.
But also, that's probably, to be fair,
just the cat and mouse of it.
They invent that, Mullvad invents DAITA.
And then they invent something else,
and Mullvad invents something else.
So I don't know.
I try to focus on what's within our
ability to control and defend against.
And I try to be grateful that we
do have so many good options,
even if they're not perfect.
Yeah.
Really quick, not a question,
but somebody said ByteDance is still
keeping around twenty percent of the
U.S.-based TikTok, but most of its U.S.
ops are sold.
Yeah.
I don't know.
It's the whole thing is like clearly not
about getting ByteDance out of TikTok
either.
Like it's just a pure political thing
going on.
So I'm pretty sure that is true.
I think I have I have heard that.
And like the fact that it's all working
together and they're maintaining this this
ownership,
but they're also just like
partially being taken over by all these US
companies.
It makes no sense.
And I don't know.
The TikTok thing is crazy because I don't
know how much you got into this in
your interview with Taylor Lorenz,
but I know she's been talking about this
lately.
I know other people have pointed it out
on social media.
The whole TikTok thing was started
and really pushed for by the Democrats
during Biden's administration.
And I think a lot of people like
people in our position at the time were
saying like, oh, if we let this happen,
if we let the Democrats
do this and push this forward, this is
going to obviously be misused by some
government in the future. And then, lo and
behold, you know, a few years later, that
is exactly what is happening, right?
I don't know, that could be a whole
political discussion, but yeah.
I think that the state of America
right now is just concerning,
because, um,
A lot of people are working towards like
all of the problems that we're seeing now.
It's not just like the current
administration right now decided to do
this, right?
This was a long time in the making.
It was a bipartisan effort to take over
TikTok.
And now we're seeing the results of that.
And I think that that's really
unfortunate.
Exactly like Jordan,
one of our producers just said,
seems like they wanted to control the
algorithm.
Yeah,
that was pretty much the only goal with
all this TikTok stuff, which...
I don't know.
It shouldn't be in control of any of
these governments,
you can definitely argue.
It wasn't great in China's hands either,
but we haven't improved the situation for
sure.
And this is, yeah,
just to agree with you, this is why...
I love the analogy of Carey Parker from
Firewalls Don't Stop Dragons.
He refers to personal data as like
radioactive waste.
And he's like,
you want as little of it as possible
because you can't handle it safely.
And the stakes are too high if something
goes wrong.
And it drives me insane that America has
such this attitude of like, well,
it's okay if
Facebook collects all this data.
Nevermind that there's literally a
Wikipedia page full of their data breaches
and privacy scandals.
But, you know, it's like, oh,
it's okay when we do it,
but when China does it, it's bad.
And it's like, or...
We could just outlaw this entirely and
it'll stop being a problem.
I know this is really not the best
example,
but just because it's morally not okay.
But I remember after Epstein's death,
somebody did an investigation.
Some reporter pulled all the location
tracking data for all the cell phones that
went in and out of his private island.
And every single one of them that went
back to Europe,
as soon as they hit European airspace,
the tracking data disappeared because of
GDPR.
And it's like, okay, yes,
not a great example because it's not great
that bad people got away with bad things,
but that proves that GDPR works.
And it's like, why can't we do that?
Why can't we just get rid of the
data?
And then China can't use it either.
Nobody can use it because it's not there.
And for the record, yes,
I know there will always be espionage and
people who flout the rules,
but it'll drop so significantly.
And it would be at very least a
huge step towards fixing the problem,
if not a perfect solution.
And it drives me insane.
Absolutely.
I think I don't want to bring up
this whole the Epstein case and the
morality of that situation.
And like, obviously, you know,
could somebody argue that GDPR is not
really helping in that case?
Maybe.
I don't know.
But yeah.
I've talked about this.
I don't remember in a post or another
video a while ago.
Basically, I think in the privacy space,
in the security space,
something we have to keep in mind is
that we see a lot of stories in
the news like that one that you just
talked about, for example,
where we're talking about criminal
activity and how either, you know,
they had an OPSEC failure and they were
caught because they weren't private
enough,
or how privacy laws are protecting
criminals.
You see both sides of this, right?
And that is the most abundant form of
coverage about privacy in general.
I think it's talking about how criminals
are impacted, because that's probably
the most newsworthy stuff.
But just like how you're talking about how
it proves that GDPR is effective.
Did GDPR maybe hinder this one specific
case?
Yes.
But all of that data that was being used
is very commonly, behind the scenes and
perfectly legally, being used by all of
these big tech companies and all of these
other organizations to do all sorts of
things that aren't catching criminals,
like sell you ads or try and implement
algorithmic pricing or trying to just
influence your opinions in general,
especially on social media.
And GDPR also helps prevent all of those
things.
But you don't hear about those stories in
the news because it's not newsworthy right
now, unfortunately.
You only hear about these criminal cases.
And so I just want to remind people
I think because there is this association
between privacy rights and
internet freedoms and digital rights and
all this stuff and criminals.
It's like just because you see it in
the context of like all of these things
being proven in court cases or in criminal
trials or in like law enforcement
investigations,
that doesn't mean it's the only place it's
happening.
It just means that's the only place that
the mainstream news media wants to write
about it in.
But you can look at all of these
cases and you can extrapolate
into like regular everyday life,
how people can improve their privacy.
You can learn from the OPSEC mistakes
of these criminals,
but also how these laws can impact and
protect you in other situations that
aren't
related to crime, right? GDPR protecting all
of that data certainly hinders all of
these bad things that I just talked about
happening, um, even if it's not widely
reported on. And so I always just want
to make that reminder, because when we talk
about criminal activity a lot, I think that
always comes up. It's like, why are you
just defending criminals? And that's not
the case, but they just have
the best cases to learn from.
Yeah, it's like that whole,
like you were saying,
news by definition is out of the ordinary.
Like we don't talk about how, you know,
the ten thousand people today who used GrapheneOS
and went to work and went home and
were completely normal, law-abiding
citizens.
It's you know, it's when it's, oh,
this guy was arrested and he was using
this weird phone that erases itself.
And it's like, OK, cool.
Yeah.
Like, obviously, that's interesting,
but that's not reflective of reality.
Yeah.
Yeah.
Let's get to our last few questions here.
Here's one from DQ.
Sorry, I think I clicked on one.
Sorry, you can do yours first then.
That's fine.
Okay, sorry.
Real quick.
Yeah, Dread Pirate Roberts said,
as more and more services block VPNs,
are there any solutions to be able to
have privacy without being blocked?
Again, VPNs aren't everything,
but I think we will see like,
I know Proton and I think Mullvad and
I think a couple others also do.
They have obfuscation to try and make it
so you can use it and it won't
be blocked.
And I know when India started requiring
VPNs to keep logs,
they did some kind of trickery where they
were able to move servers out of the
country,
but somehow address them in a way where
they looked like they were in the country.
So basically Indian users could still use
Proton and be, quote unquote, in India,
but Proton wasn't in India.
So they didn't have to comply with the
laws.
I don't know.
That's way over my head, but yeah,
I think we'll still see, and you know,
we'll still have things like Tor,
at least until they outlaw that too.
I mean, I,
It's a cat and mouse.
I think we'll have options.
But yeah,
it will definitely get harder and be more
difficult.
Just to reply to the residential IP aspect
of this really quick,
I want to say you're correct that the
residential IP and proxy space is very
shady.
I definitely wouldn't support it even if
it does work because a lot of these
residential IP proxy brokers,
they are basically running criminal
organizations and they are
tricking people into installing malware on
their computers, or tricking people into
buying these cheap, not Amazon but
Android TV boxes, on Amazon and
other marketplaces, to connect to their
networks. That's how they get all these
residential IPs, right? And funding those
operations puts a lot of regular
people in danger, because law
enforcement goes after those people all
the time, since they're hosting
basically an exit node for a VPN that's
handling all sorts of crazy traffic, right?
And it's
Not an ideal situation for anyone
involved,
so it's definitely not something that I
would pursue personally if I were you.
I would avoid the whole residential IP
space because it's pretty much all malware
that's driving that,
and that's not something that should be
really supported, I think.
Real quick on a personal note,
Dread Pirates Roberts said there's a
documentary from Vice that shows the
facial recognition capabilities in China
six years ago,
and the people I've shown it to have
been very receptive.
Please send that my way because I want
to watch that.
That sounds cool.
All right, what's the next one?
I think you had a question lined up.
I think this will be our last question
of the show here,
but this is from DQ.
They asked,
have you come across the OPSEC Bible by
Nihilist?
First of all, just stopping there,
have you heard of this?
Because I actually have not,
unfortunately.
I'm not sure if you're familiar.
I don't think so.
I want to say it sounds familiar,
but I could be making that up.
If I've heard of it,
I've definitely never read it.
Okay,
that's definitely something I will have to
look into.
But continuing your message,
I'd love to hear your thoughts on its
extreme all-or-nothing privacy philosophy,
especially since the guide criticizes the
closed-source recommendations that appear
on privacy guides.
It seems to push a very different approach
from the more mainstream privacy advice
you usually promote.
And just based on that...
That is a pretty common thing that we
see with a lot of privacy guides out
there,
especially ones that are published
anonymously or are catered towards a more
hardcore audience.
It's definitely a different audience than
we're going for.
I think that the biggest thing that we
try to do at Privacy Guides is try
to find all of these tools in different
categories that can really
raise the bar for privacy as a whole.
We can't solve every problem at once.
And I think this ties into a lot
of the things that we were talking about
earlier in the show as far as convincing
people to switch from WhatsApp to Signal,
for example.
People are using things that are crazy
for your privacy,
extremely privacy-invasive things.
People are using Windows, for example,
which I think is not something people
should be doing in 2026.
That's the state that most people who
haven't heard of any of this are at right
now.
And so switching, I mean,
even switching from Windows to macOS is
not ideal.
If somebody comes up to me asking what
the most private operating system is,
macOS is far behind the actually better
options by a wide margin.
But compared to what people are coming
from, which is Windows in this case,
it's a huge advantage, and people are more
apt to switch to it.
And I think that encouraging people to
switch to, in some cases,
some proprietary systems over time is
better than the outcome that privacy
guides like the OPSEC Bible in this case
probably produce.
I personally think the outcome of a guide
like that,
if I put it in the hands of
a normal person,
is that they will not follow any of
the advice.
Because we see this even in our forum,
but definitely less so recently.
And we've made some changes to improve
this.
But it's a very common complaint, I think,
in the privacy community,
where people feel burned out because they
feel the need to switch all of these
things at once,
to completely cut off less private
alternatives or things that their friends
are using,
and they end up feeling socially isolated.
And that's not really the goal of being
private.
Privacy is a right that you should have
and should be able to exercise,
but it's not like you need to be
completely private in all aspects of your
life.
Everyone still needs to have a social life
and interact with other people and that
sort of thing.
And yeah, at the end of the day,
When we recommend something like
1Password, for example,
it's because we've looked at it and
decided that, compared to what other
people are using,
which is either no password manager at all
or something like LastPass,
which notoriously has had a ton of data
breaches and security issues,
solid proprietary tools that respect your
privacy relatively well are acceptable to
us.
And we would rather people switch to that
than not follow the advice at all.
And for people who are looking for more
advanced or more customized
recommendations,
I think we have the forum, which is
going to be able to answer those sorts
of questions for people who have moved
beyond the general advice on our site
and who don't need the approach we take
for the general population,
where we try to balance privacy, security,
and user experience.
You can really hone in on a good
situation for you through those
discussions.
And I think that's the value of the
Privacy Guides community forum that none
of these guides are going to be able to
provide.
Because at the end of the day,
all of this tailored advice is going to
be better in general than any of these
guides, to be honest.
So that's my thoughts on that.
Yeah, you kind of said what I'm thinking,
so I'll keep this quick.
But y'all are going to get tired of
hearing me say the words "harm reduction."
In addition to the harm reduction mindset,
which I'm a huge, huge fan of,
I think there's also the idea that two
things can be real.
I really don't like the narrative that
some people push where,
if you're not doing privacy my way and
you're not going a hundred percent,
then you're wrong.
Because the fact of the matter is, they're
wrong too.
The only way to really be private is to
just throw away your computer,
never get on the internet,
and go live in a cabin in the woods.
And I would like to reiterate that's not
foolproof either,
because they did find Ted Kaczynski.
So yeah,
I don't know.
I really reject that whole extreme all or
nothing.
This is the only way to do it.
I think it's really arrogant.
I think it's really disrespectful.
Again, I want to reiterate,
I haven't read this Bible,
so I'm not passing judgment on Nihilist.
This is just in general.
I think when people do that,
it's really...
I don't know, like Jonah was saying,
I've seen people look at certain guides
and websites and straight up say, "Yeah,
I'm not doing that."
And I would rather people make small steps
that do something than do nothing at all.
And I think some people,
not all of them,
but I think some people will take those
small steps and go, oh,
that wasn't so bad.
That was actually kind of fun.
What else can I do?
And they'll go above and beyond.
Like, I don't need to be using Qubes.
That is not part of my threat model.
I like it.
I think it's fun.
So,
and I think kind of going back to
what I said about like two things can
be real.
I think it's great that there are these
really hardcore guides
For the people who want to be hardcore.
Or even, like, when Michael Bazzell was
doing his podcast,
I would listen all the time.
And I still read his books,
his Extreme Privacy books,
because I like the thought experiment.
That's what I'm looking for.
I like the thought experiment.
I like knowing how deep the rabbit hole
goes and just knowing what the options
are,
even though ninety percent of the time I
walked away going,
"I'm not going to do any of that."
But it's really cool to know that that's
a thing, and it's really interesting,
and it's fun to learn about.
And, you know, some people would do it,
and there were some things that I would
listen to and be like, oh,
I think I might want to try that,
actually.
So I don't think it's a bad thing
that this stuff is out there.
I think it's really cool,
as long as they're not adopting that
attitude of, "Well,
this person's wrong."
I mean, unless somebody's actually wrong,
then like, hey, please,
if you think we're wrong,
open a thing on the forum.
Like, let us know.
But...
It's, you know,
it's respecting that there's different
priorities,
there's different threat models,
there's different resources.
Like, you know,
somebody posted in the forum recently
saying they disagree with our Android
recommendations because not everybody
lives in a country where they can get
a Pixel, and not everybody can afford one.
And, you know,
that's true of these more extreme privacy
things too.
So, yeah.
Yeah.
Again, haven't read it,
but if he's coming at it from the
perspective of like,
this is how I think you can get
the maximum level of privacy, then great.
I think that's really cool that there are
those guides,
but I don't think that invalidates things
like privacy guides where we say,
this is probably good enough for most
people.
And I think both of those things can
exist.
And DQ, thanks for sharing in the chat.
I'll link to this.
I want to reiterate,
nothing that I was saying before is any
judgment of this guide in particular,
because again, I haven't read it.
Neither of us have read it.
I was going to say,
he might have been talking to me.
It certainly could have...
Good advice.
Right.
And I'll definitely check it out.
So thanks again for sharing.
That's just my experience based on other
guides and based on how you described it.
I've definitely seen guides like that,
and yes,
it's probably not the target audience
that we are trying to go for.
I kind of have a philosophy of being
accessible and also being more of a
public face when it comes to all of this.
Obviously, I do this under my own name,
for example, and not a pseudonym.
I think that is a difference in approach,
and it reaches different people.
And I think that guides like that and
projects like Privacy Guides both serve
their own purpose.
But yeah,
for anything more than just the basic
stuff that we have on our site,
that is the point of our forum.
Because, yeah,
I really don't know if any of these
guides can really be everything for
everyone, right?
But I'm sure for a certain group of
people, that guide could be very good.
And I will definitely take a look at
it because I like to read other guides
out there.
Yeah,
I perused some of the articles on the
website.
I didn't go straight to the Bible;
I went to the root website.
Some of it is pretty extreme,
like which countries don't have
extradition laws.
Which, no offense to this guy,
but if that's my threat model,
I'm not going to trust a random website
on the internet.
I'm going to talk to an actual lawyer.
But then others were like,
how to get started with I2P,
and I don't really have strong opinions
on I2P. But I don't know,
I'll peruse it.
I'll check it out later this weekend.
Some light reading for the weekend.
Yes.
Yes.
Well, Nate,
I think this probably about wraps things
up here.
As a quick reminder to everyone,
Privacy Guides is a nonprofit.
We're dedicated to protecting our digital
rights.
If you want to support the show and
our mission,
a donation at privacyguides.org would be
much appreciated.
I want to thank Nate for joining me
this week.
Before we wrap up this broadcast here,
I want to deliver a quick message as
the program director of Privacy Guides
about the current state of the United
States of America.
As a Minnesotan and a resident of the
city of Minneapolis myself,
this is a very important issue to me.
We're only one month into 2026 right now.
And already this year,
ICE agents of the federal government of
the United States are responsible for the
extrajudicial killings of two American
citizens right here in my city for
exercising their constitutional rights.
This happened as part of a larger ICE
campaign to terrorize my neighbors and
this country,
which is a campaign that I know many
Minnesotans protested in force last week,
and I know many American patriots are
protesting today.
Our mission at Privacy Guides has always
been to support the right of privacy for
all people,
regardless of political views or the
country that people live in.
It's also our mission to speak out against
government overreach,
particularly when it comes to surveillance
and especially when government agencies
are being pitted against the very
taxpayers and citizens that they are meant
to protect and serve.
Here in the United States,
that has recently meant speaking out
against the Democrats who aim to increase
surveillance and censorship through bills
like KOSA or the planned repeals of
Section 230.
But now it also means speaking out against
the Republicans currently in our
government,
who are weaponizing state surveillance
systems and law enforcement bodies like
ICE to target their perceived political
enemies and immigrant members of our
communities,
without respect to their legal residency
status or any due process.
This weaponization of ICE by the Trump
administration is not happening in a
vacuum.
It's fueled by the very surveillance data
and the lack of digital boundaries that
we have been fighting against for years:
laws which were enacted within my
lifetime, like the PATRIOT Act,
and loopholes like the continued lack of
regulations against commercial data
brokers,
which allow the government to bypass the
Fourth Amendment entirely by purchasing
our own GPS and social media data from
tech companies to map out our
neighborhoods for raids.
Minneapolis has also become the testing
ground for invasive and inaccurate facial
recognition apps like Mobile Fortify,
where AI glitches,
just like we talked about in this episode,
can lead to unlawful detentions of
innocent people and the sort of
state-sponsored surveillance that took the
lives of Renee Nicole Goode and Alex
Peretti.
In times of overreach,
our greatest defense is our community and
our refusal to be intimidated into
silence.
And I've seen firsthand how powerful that
can be.
The reality is that how ICE is operating
within the borders of the US is
unjustifiable.
So we here recognize the significance of
this unprecedented situation,
and we stand alongside everyone who's
protesting in support of the protection of
our neighbors and for American rights,
which is something that I think all
Americans should support.
Thank you all for tuning in.
I hope you all have an excellent weekend.