This probably already exists

…and is why no text on the Internet after 2017-2018 can be trusted to come from a real person rather than an LLM bot farm.

Method

Uses

Target manipulation

Target specific individuals or broader demographics by fine-tuning a model on their email history and other personal data, then testing the model’s responses to different angles of approach. Use the model to get a realistic idea of how they might feel about a topic, how they are best manipulated, where their vulnerabilities lie, and so on.

Chat with the model, and keep training it on the responses that come back.

Use cases here:

Target reconnaissance

Use it to estimate the probability that a person belongs to specific demographic groups, or go meta and use it to find the questions most likely to get the target to reveal the most information about themselves.

This can then be used to pick out wanted individuals from a list of candidates, so it is likely very useful for military intelligence. Extracting information from targets in this way could also be used to gather blackmail material on large numbers of people across an economy, presumably much as porn sites are used by Western powers.

Troll farms

Pick large numbers of people in the distributions you want to emulate, generate backstories, and use bots to post online from each of their specific points of view. It should be possible to write a complaint from the point of view of a 32-year-old nurse who is a single mother, born in Manchester, with one brother, and so on.

Repeat this a million times and you have an army of distinct personalities that can be used to push any agenda.

Bot and owner detection

Actual people are likely to be out-of-distribution and far less predictable than text generated by a language model. Text created by a language model will sit within the distribution of its training data.

Those who hold the largest data sources, and know which data sources other people are drawing on, should be able to detect variance from known data collections, and not only tell the difference between a bot and a real person but also work out who is running the bots.
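One crude way to operationalise this is perplexity scoring: text a language model generated tends to score low perplexity under that model (or under models trained on similar data), while idiosyncratic human writing tends to score higher. Below is a minimal sketch, assuming the Hugging Face transformers library, PyTorch, and GPT-2/distilGPT-2 as stand-in reference models; the model names, the sample post, and any threshold you would put on top of the scores are illustrative assumptions, not calibrated values.

```python
# Minimal sketch: score text by its perplexity under candidate language models.
# Assumes `pip install torch transformers`; gpt2/distilgpt2 are stand-ins, not
# the actual models a bot operator would be running.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def perplexity(text: str, model_name: str = "gpt2") -> float:
    """Return the perplexity of `text` under a causal language model."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the mean
        # negative log-likelihood of the text as its loss.
        outputs = model(**inputs, labels=inputs["input_ids"])
    return torch.exp(outputs.loss).item()


post = "Honestly, as a lifelong resident I think the new bypass is a great idea."

# Low perplexity relative to typical human writing hints at generated text;
# comparing scores across several candidate models is one weak signal for
# which model family (and so, perhaps, which operator) produced it.
for name in ["gpt2", "distilgpt2"]:
    print(f"{name}: perplexity {perplexity(post, name):.1f}")
```

A single post’s score is noisy, so in practice this would need plenty of reference text and calibration against known human and known generated samples.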

Corporate sabotage

If you know how to manipulate decision makers, it ought to be possible not only to make them buy products that aren’t the best choice for them, but also to trick them into making decisions that trap them and cause problems later.

This could be as simple as using multiple fake users to ask for a product that is too expensive to build or maintain, getting the company missold products or services that aren’t right for it, encouraging it to overstretch itself, or nudging it into changing direction at the wrong moment (a false pivot).

By spending a small amount on a company’s cheapest services, a false demographic of customers could be created and then rug-pulled later.

Uses:

Sybil and asymmetric warfare attacks

Contain, control and influence people, or waste their resources, by surrounding them with false personas that exist only to soak up interaction time from real people.

Impersonation and fraud

Personalized attacks that impersonate the writing style and background context of a target’s real contacts.