- No personal data processed is used for AI/model training. Data is exclusively used to confirm your identity.
- All biometric personal data is deleted immediately after processing.
- All other personal data processed is automatically deleted within 30 days. Data is retained during this period to help users troubleshoot.
- The only subprocessors (8) used to verify your identity are: AWS, Confluent, DBT, ElasticSearch, Google Cloud Platform, MongoDB, Sigma Computing, Snowflake
The full list of sub-processors seems to be a catch-all for all the services they provide, which include background checks, document processing, etc., identity verification being just one of them. I've worked on projects that require legal to get involved, and you do end up with documents that sound excessively broad. I can see how one can paint a much grimmer picture from the documents than what's happening in reality. It's good to point it out and force clarity out of these types of services.
[1]: https://www.linkedin.com/feed/update/urn:li:activity:7430615...
I ended up deciding that I was getting no value from the account, and I heard unpleasant things about the company, so I deleted the account.
Within hours I started to get spam to that unique email address.
It would be interesting to run a semi-controlled experiment to test whether this was a fluke, or if they leaked, sold, or otherwise lost control of my data. But absolutely I will not trust them with anything I want to keep private.
I do not trust LinkedIn to keep my data secure ... I believe they sold it.
Was forced to verify to get access to a new account. Like, an interstitial page that forced verification before even basic access.
Brief context for that: I was being granted a salesnav licence, but to my work address with no account attached to it. Plus I had an existing salesnav trial underway on my main account and didn't want to give work access to that.
So I reluctantly verified with my passport (!) and got access. Then I looked through all the privacy settings to try to see what I'd given, but the full export was only my sign-up date and one other row in a CSV. I switched off all the dark-pattern ad settings that were on by default, then tried to recall the name of the company. Lack of time meant I haven't been able to follow up. I was deeply uncomfortable with the whole process.
So now I've requested my info and deletion via the details in the post, from the work address.
One other concern is that if my verified account is ever forced to become my main one, I'll be screwed for contacts and years of connections. So I'll try to shut it down soon, once I'm sure we're done at work. But tbh I don't think the issues will end there either.
Why do these services have to suck so much? Why does money confer such power instead of goodwill, integrity and trust/trustless systems? Things have to change. Or, just stay off the grid. But that shouldn't have to be the choice. Where are the decentralised services? I'm increasingly serious about this.
On the other hand it can be hard to escape if it's for something that actually matters. Coursera is a customer. You might want your course achievements authenticated. The Canada Media Fund arranges monies for Canadian creators when their work lines up with various government sponsored DEI incentives. If you're in this world you will surely use Persona as required by them. Maybe you're applying for a trading account with Wealthsimple and have to have your ID verified. Or you want to rent a Lime Scooter and have to use them as part of the age verification process.
KYC platforms have a place. But we need legal guarantees around the use of our data. And places like Canada and Europe that are having discussions about digital sovereignty need to prioritize the creation of local alternatives.
LinkedIn is full of so-called professionals who make a living by leveraging their brand. If you're not one of them, leave.
1. they are selling you as a target.
2. some people, governments, groups, whatever are willing to pay a lot of money to obtain information about you.
3. Why would someone pay good money to target you unless they were going to profit from doing so? Are they stupid? No.
4. Where does that profit come from? If someone is willing to pay $100 to target you, how are they going to recoup that money?
5. From you.
There is simply no other way this can have worked for this long without this being true.
It is a long causal chain, so it is fair to ask whether there is any empirical evidence. If this is true, what would we expect to see? Well, how about prices going up? How about people in general being less able to afford housing, food, cars, etc.?
I'm speculating here, but perhaps it is predictability. There is a common time warp fantasy about being able to go back and guess the future. You go back and bet on a sports game. If I can predict what you are going to do then I can place much more profitable bets.
Do the corporations that participate in this scheme provide mutual economic benefit? Do they contribute to the common wealth or are they parasitical?
No one likes to think they have parasites. But we all do these days.
> Let that sink in. You scanned your European passport for a European professional network, and your data went exclusively to North American companies. Not a single EU-based subprocessor in the chain.
Not sure LinkedIn is a European professional network.
It happened last week too; I was able to fix it via their chat help (human). Yesterday, their chat help (human) was not able to fix it and had to open a ticket. I pay for LinkedIn Premium. So maybe this is just a scam to route me into verification. Their help documents (https://www.linkedin.com/help/linkedin/answer/a1423367) for verifying emails don't match the current user experience.
Then, in a classic tech-paradox, their phone support person told me they would email me -- on the same address their system reports emails are not getting through to. It felt like 1996 levels of understanding.
We need to get back to de-centralised.
And the content is the worst trash you'll find online, bottom of the barrel.
- that I just have "work email verified" and that there is a Persona thing I was not even aware of
- a post by Brian Krebs at the top of my feed, exactly on that topic: https://www.linkedin.com/posts/bkrebs_if-you-are-thinking-ab...
If LinkedIn asks me to verify then I'll just leave. I'd be very happy for it to fall over anyway so there is space for a new more ethical platform. Especially since Microsoft acquired it, all bets are off.
> Let that sink in. You scanned your European passport for a European professional network, and your data went exclusively to North American companies. Not a single EU-based subprocessor in the chain.
LinkedIn is an American product. The EU has had 20 years to create an equally successful and popular product, which it failed to do. American companies don’t owe your European nationalist ambitions a dime. Use their products at your own discretion.
Of course an American company is subject to American law. And of course an American company will prioritise other local, similar-jurisdiction companies. And oftentimes there's no European option that competes on quality, price, etc. to begin with. In other words, I don't see why any of this is somehow uniquely wrong to the OP.
> Here’s what the CLOUD Act does in plain language: it allows US law enforcement to force any US-based company to hand over data, even if that data is stored on a server outside the United States.
European law enforcement agencies have the same powers, which they easily exercise.
"Your European passport is one quiet subpoena away"
Why does the subpoena need to be quiet? If I search my chats with ChatGPT for the word "quiet", I get a ridiculous number of results. "Quietly this, quietly that". It's almost like the new em dash.
There are many others all over this blog post that I won't bother calling out.
"Understanding what I actually agreed to took me an entire weekend reading 34 pages of legal documents."
Yeah I'll bet it did. Or it took an hour of back and forth with ChatGPT loaded up with those 34 pages.
I get it, we all use AI, but I'm just so tired of seeing the unmistakable mark of AI language all over every single thing. For some reason it just makes me think "this person is lazy". The CEO of a company my friend works for used Claude to write an important letter to business partners recently and we were all galled at her lack of awareness of how AI-sloppified the thing was. I guess people just don't care anymore.
On the other hand I see many people posting in official capacity for an organization without verification.
When they actively represent their current company but with a random verification from a previous one it gets pretty absurd.
In its current form LinkedIn verification is pretty worthless as a trust signal.
We regulated innovation out of the market. Why are you surprised that the only companies finding your data valuable are in the US?
There are so many angles of grind with this kind of thing that big tech has gradually normalised.
Does anyone else get the impression that they feel like the nefarious surveillance state is now real and definitely not for their benefit?
It's been a long-running trope of the men in black, and the state listening to your phone calls, etc. Even after Snowden's leaks, where we learned that there are these massive dragnets scooping up personal information, it didn't feel real. It felt distant, and possibly could have been a "probably good thing", that is, if it was needed to catch "the real bad guys".
It feels different now. Since last year, it feels like the walls are closing in a bit and that now the US is becoming... well, I can't find the words, but it's not good.
The government should provide an API or interface to validate a user, essentially acting just like an SSO. Instead of forcing users to upload raw passport scans to a third-party data broker, LinkedIn should just hit a government endpoint that returns an anonymized token or a simple boolean confirming "yes, this is a real, unique person." It gives platforms the sybil resistance they need without leaking the underlying PII.
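A minimal sketch of what that could look like from the platform's side, assuming a hypothetical government attestation endpoint (the URL, field names, and token format below are all made up for illustration, no such API exists today):

```typescript
// Hypothetical flow: the user authenticates with a government identity
// provider, which hands the platform a one-time code. The platform
// exchanges that code for an attestation and never sees the passport.

type VerificationAttestation = {
  verified: boolean;     // "yes, this is a real, unique person"
  subjectToken: string;  // anonymized, platform-scoped pseudonym for dedup
  expiresAt: string;     // attestations should be short-lived
};

async function verifyPerson(oneTimeCode: string): Promise<VerificationAttestation> {
  const res = await fetch("https://id.example.gov/v1/attest", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ code: oneTimeCode }),
  });
  if (!res.ok) throw new Error(`attestation failed: ${res.status}`);
  return res.json(); // no name, no document scan, no raw PII crosses the wire
}
```

The key property is that the platform only ever learns the boolean and a pseudonym it can use to block duplicate accounts, not who you are.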
I've been documenting this pattern in AI apps specifically. The number of companies shipping to production with Firebase rules set to "allow read: if true" or Supabase databases with no Row Level Security is staggering. The identity data people hand over during verification often ends up in databases with zero access controls.
LinkedIn at least has a security team. Most AI startups shipping verification flows don't.
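To make that concrete, here is a minimal sketch (the project URL, anon key, and table name are made up) of why a Supabase table without Row Level Security is effectively public: the anon key ships in every client bundle, so any visitor can run this.

```typescript
import { createClient } from "@supabase/supabase-js";

// The anon key is embedded in the client bundle by design; it is only safe
// if Row Level Security policies restrict what it can read.
const supabase = createClient(
  "https://example-project.supabase.co", // hypothetical project
  "public-anon-key"                      // hypothetical anon key
);

// With RLS disabled on the table, this returns every row, including any
// identity-verification records, to an unauthenticated caller.
async function dumpVerifications() {
  const { data, error } = await supabase.from("verifications").select("*");
  console.log(error ?? data);
}
```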
For each role I had described some of the tasks and accomplishments and this was used in the phishing message.
Since then, I removed my photo, changed my name only to initials and removed all the role-specific information.
It's a bit of a bummer as I'm currently in the process of looking for a new job, and unfortunately having a LinkedIn profile is still required in some places, but once I find one, I'll delete my profile.
I guess I'll just be in the corner crossing my fingers none of it is found in a hostile foreign land or used against me.
Did you actually follow through with 1-4, and if so, what was the outcome? How long did it take?
I was under the impression they just make database products. Do they have a side hustle involving collecting this type of data?
> Hesitation detection — they tracked whether I paused during the process
> They use uploaded images of identity documents — that’s my passport — to train their AI.
> Persona’s Terms of Service cap their liability at $50 USD.
> They also include mandatory binding arbitration — no court, no jury, no class action.
Every hiring process I've been through already requires proof of identity at some point. Background checks, I-9s, whatever it may be. So you're essentially handing your ID to a third party just to get a badge that doesn't skip any steps you'd have to do anyway.
The straight-from-LLM writing style is incredibly grating and does a massive disservice to its importance. It really does not take that long to rewrite it a bit.
I hope at least he wrote it on his local Llama instance, else it's truly peak irony.
> Here’s the thing about the DPF: it’s the replacement for Privacy Shield, which the European Court of Justice killed in 2020. The reason? US surveillance laws made it impossible to guarantee European data was safe.
> The DPF exists because the US signed an Executive Order (14086) promising to behave better. But an Executive Order is not a law. It’s a presidential decision. It can be changed or revoked by any future president with a pen stroke.
This understates the reality: the DPF is already dead. Double dead, two separate headshots.
Its validity is based on the existence of a US oversight board and redress mechanism that is required to remain free of executive influence.
1. This board is required to have at least 3 members. It has had 1 member since Trump fired three Democrat members in Jan 2025 (besides a 2-week reinstatement period).
2. Trump's EO 14215 of Feb 2025 has brought (among other agencies) the FTC - which enforces compliance with the DPF - under presidential supervision. This is still in effect.
Of course, everyone that matters knows this, but it doesn't matter, as it was all a bunch of pretend from day 1. Rules for thee but not for me, as always. But what else can we expect in a world where the biggest economy is ruled by a serial rapist.
You read and agreed with the terms explicitly stating the data would be used to do those things, and it was not at all necessary for you to do that. What else do you want? It seems like consent isn't the issue. You just don't like what this company does, and still volunteer your data for them to do just that. Now you regret it and write a blog post?
One thing is to be tricked or misled, or for a government to force your face to be scanned and shared with a third party. Another is to have terms explicitly saying this will be done, requiring explicit agreement, and no one forcing you to do it.
Hiding all this very important info (which literally affects the user's life) behind an insignificant, boring click! Even the most paranoid user will give up in certain use cases (like with COVID-19: even if you didn't agree, you needed to travel or work, which made it compulsory). Every company that uses deceptive techniques like this should be banned in Europe.
Less off topic -- there are some black hat marketers that (I think) buy or create verified profiles with attractive women, then they use the accounts for b2b sales through linkedin DMs. I find that amusing. Neutered corpo bois are apparently big poon hounds. Makes sense when you think about it -- that type of guy is craving female attention and probably does not have the balls to do anything in real life, so a polite DM from a fake linkedin thot would be appealing.
Is there anything special about a passport photo, or can that be done from any photo of your face?
Stop using LinkedIn, and stop using these terrible services that rip away our privacy.
I resolved everything except LinkedIn. They required Persona verification to restore access, but I'd already recently verified with Persona, so clicking the re-verification links just returned a Catch-22 "you've already verified with us." LinkedIn support is unreachable unless you're signed into an account. I tried direct emails, webforms, DMs to LinkedIn Help on Twitter, all completely ignored.
Eventually some cooldown timer must have expired, because Persona finally let me re-verify last week. Upon regaining access, I was encouraged to verify with Persona AGAIN, this time for the verified badge.
I now have a taste of what "digital underclass" means, and look forward to the day when no part of my income depends on horrible platforms that make me desperate for the opportunity to give away my personal data!
Anyway, I found that too much of a hassle and switched to other LLM providers.
The need/demand for some verification system might be growing, though, as I've heard fraudulent job applications (people applying for jobs using fake identities… for whatever reason) are a growing trend.
I gave in and verified. Persona was the vendor then too. Their web app required me to look straight forward into my camera, then turn my head to the left and right. To me it felt like a blatant data collection scheme rather than something that is providing security. I couldn't find anyone talking about this online at the time.
I ended up finding a job through my Linkedin network that I don't think I could have found any other way. I don't know if it was worth getting "verified".
---
Related: something else that I find weird. After the LinkedIn verification incident, my family went to Europe. When we returned to the US, the immigration agent had my wife and me look into a webcam, then greeted us both by name without handling our passports. He had to ask for the passport of our 7-month-old son. They clearly have some kind of photo recognition software. Where did they get the data for that? I am not enrolled in Global Entry or TSA PreCheck. I doubt my passport photo alone is enough data for photo recognition.
Do we know how they get that? Because my fingerprints are also in there, so...
Well they made it. They conquered the recruitment scene and I can’t think of a company I’d wish had gone out of business sooner.
Am I wrong?
tl;dr Persona shares your identity data directly with the federal governments of the US and Canada and likely is sharing data/works with ICE on the same.
The OP is right. For that reason we started migrating all of our cloud-based services out of the USA into EU data centers run by EU companies. We are basically 80% there. The remaining 20% are not the difficult ones; they are just not important enough to care about that much at this point, but the long-term intention is a 100% disconnect.
On IDV security:
When you send your document to an IDV company (be that in the USA or elsewhere), they do not have the automatic right to train on your data without explicit consent. There have been a few pretty big class action lawsuits in the past around this, but I also believe that the legal frameworks are simply not strong enough to deter abuse or negligence.
That being said, everyone reading this must realise that with large datasets it is practically very likely that data gets mislabelled, and it is hard to prove that this is not happening at scale. At the end of the day it will be a query running against a database, and with huge volumes it might catch more than it should. Once the data is selected for training and trained on, it is impossible to undo the damage. You can delete the training artefact after the fact, of course, but the weights of the model have already been re-balanced with the said data unless you train from scratch, which nobody does.
I think everyone should assume that their data, be that source code, biometrics, or whatever, is already used for training without consent and we don't have the legal frameworks to protect you against such actions - in fact we have the opposite. The only control you have is not to participate.
Actually Steve Blank has a great talk on the roots of Silicon Valley. SV was basically built upon military tech meeting private equity. That's why it's wildly different from, say, the Berlin startup scene, and why their products are global and free.
In any case, I don't know how much more ad money they'll extract from knowing what I look like. Maybe beauty products?
It won't be long before we'll be required to verify ID for every major website.
Could never find any explanation why I was targeted by this - it said it detected “suspicious activity” but I only ever interacted with recruiters, and only occasionally. Supposedly it is deleted after if you don’t go all the way through, but I do not believe it. This data ends up in very weird places and they can go fuck themselves for it afaic.
> The reason? US surveillance laws […]
This slop in every blog post? Fucking tiresome.
So it was nothing special for me.
Also, the content on LinkedIn is terrible and fake.
Need to start shunning these bad actors.
LinkedIn is a social network and I wish there was an alternative.
I never understand why people supply too much info about themselves for small gains.
People at LinkedIn want you to believe that your career is safe if you play their games, but ironically they are one of the main reasons why companies nowadays are comfortable with hiring and firing fast.
Last year I was trying to set up a business LinkedIn page for SEO purposes, which meant I also had to create a personal account. After being told several times that I absolutely needed to scan my ID card with that dodgy app, I simply replied that I couldn't do it due to security concerns. After several weeks they unlocked my account anyway, but I suspect this would not have happened if the algorithms had determined that I actually needed that account to find a job and pay my bills.
https://en.wikipedia.org/wiki/Paravision_(identity_verificat...
What this user missed is the affidavit option: you can get a piece of paper attested by a local authority and upload that instead, if you really really need a LinkedIn verified account.
Microsoft can go jump.
Aside from their AI-slopped newsfeed (F@#$!!!) which should have died long ago, this is atrocious. "Enshittification" was created just for this. Sorry, I got sidetracked.
Isn't there anyone from LinkedIn here??
It’s truly a shame we are allowing these companies to steal and share and abuse our personal data, and it’s even worse that even the very basics of that data are so often blatantly wrong.
That's quite cool, it means that soon models will be able to create fake ID photos with real data.
I'm so excited about it! /s