Rules for theft

I give away a lot. Free expert advice.
At Forrester, we charged for content and advice. I found that constraining. I’d rather just give it away and help people.
You might think that if you give things away, you lose control over them. Certainly, I can’t stop people from stealing what I give away. But there are norms and there are rules.
In the age of AI, though, those rules are fuzzy.
What’s allowed
You can read what I write.
You can act on what I write.
You can link to what I write.
You can quote me accurately and use my name. No more than 150 words at a time, please.
You can build on my ideas if you give me credit for those ideas.
You can summarize my ideas if you give me credit.
You can disagree with my ideas if you describe them accurately first and give me credit.
What’s not allowed
You can’t quote me without citing me by name. That’s plagiarism.
You can’t quote pages at a time. That’s plagiarism.
You can’t paraphrase what I wrote without attribution. That’s plagiarism, too.
You can’t use my ideas without mentioning who created them. That’s a gray area — how many people write about disruption without mentioning Clay Christensen? — but in honest discourse, people give credit to the creators of ideas.
You can’t attach my name to ideas that I had little to do with. That’s where Grammarly went wrong.
You can’t misinterpret my ideas and use that interpretation to criticize them. There will be endless arguments about this, but in the end, if you cite my ideas, you should describe them in ways I’d agree with.
You can’t use my picture and synthesized video to make it seem like I’m doing things I never did. Well, you can, but you really shouldn’t.
What’s in dispute
Can your AI hoover up my content and dispense advice based on it, with credit? The law says yes. Perhaps the law should be changed.
Do you have to give credit? Some advice is pretty much in the public domain, like “think long-term.” There will be many hard calls about this.
What’s the penalty for getting my ideas a little wrong, or a lot wrong?
Can your AI simulate my thinking without my permission? I find that when you ask an LLM “What would Josh Bernoff say about . . .,” it’s usually wrong. That’s annoying. But suppose it got more accurate. Is that a violation of my rights? Is simulating me allowed? (Ask the comedians who do Donald Trump impressions.)
If you use an AI to generate text without realizing that the text is based on another person’s ideas, is that wrong? If so, who’s in the wrong, the user or the people who created the AI?
The problem
Our norms have never been tested in a world where vast amounts of content are freely available and machines can gather all of that up and act on it.
Nobody has a problem if a person reads a book and uses it to get smarter and act more intelligently.
But a lot of us have a problem if a machine reads a bunch of blog posts and uses them to “get smarter” and answer people’s questions.
Grammarly crossed the line when it generated criticisms and attributed them to people’s names.
Anthropic crossed the line when it trained on copyrighted books by breaking ebook encryption. But as far as the law is concerned, if it had trained on the same books using text from scanned pages, that’s not illegal. At least, it hasn’t been ruled illegal yet.
The tech is not just moving faster than the law. It’s moving faster than our intuition.
By default, tech companies “move fast and break things.” That’s one way to find the boundaries of propriety.
But is it really the best way?