I have recently written a half-baked idea into a book chapter for a book that is currently with the publishers. This makes me very nervous because it is an extremely public way to test out how something lands!
Technically, the whole book could be considered a half-baked idea, but most of the chapters contain information I have known and/or worked with for a while. The one exception is the chapter I wrote on AI, because one of the reviewers felt that I really needed to address that topic. I think they were right, but I’ve also been avoiding AI like my life depended on it, so I felt both unqualified and somewhat hypocritical discussing it. I did a bunch of research, so I’d like to think it is reasonably well informed, but I am very aware that the spin I put on that information is relatively half-baked. Even in the chapter itself, I admit that I will probably have to get over my own reluctance and begin to engage with AI more, which will undoubtedly change my opinion.
It feels profoundly uncomfortable to release those statements out into the world, but, as indicated in this and other lessons in this course, you can’t progress your thinking if you don’t subject yourself to other perspectives!
This sounds quite courageous, Caitlin! Like you, I feel unqualified to talk with any authority on Generative AI (though I have written a few posts) and that is another reason to fold this workshop, as I know it will need more of a focus on AI and I am not willing to invest the effort needed for that. I will turn 67 this month and I will be focusing on other aspects of my life in the coming years. The ‘enshittification’ of the internet and the commerce that rides upon it is getting to be too much for my liking.
While AI is interesting and undoubtedly useful in many ways, I can definitely see where you’d prefer to use your time on other things. I hope you will be spending time in beautiful places, doing relaxing and creative things, away from a computer screen!
Thank you, Caitlin. We are off to Spain in mid-April for our 2026 adventure — Malaga, Tarifa, Cadiz, and Cordoba.
Caitlin – I appreciate your explicit share – “It feels profoundly uncomfortable to release those statements out into the world, but, as indicated in this and other lessons in this course, you can’t progress your thinking if you don’t subject yourself to other perspectives!”
My half-baked idea is around the concept of “AI”. My position is that we would all benefit greatly from using the term more carefully and precisely, or as little as possible. When I see “AI” used as an actual thing, it makes me itchy. Or Aitchy. Those who want to talk about the “AI” that is anything and everything benefit from vagueness. It feels that sometimes when people are talking about AI, they are really talking about the cult(ure) of AI these days. To me, this is separable from the underlying technology or systems, both historically and currently. One reference for me is Christina Wodtke’s [I Love Generative AI and Hate the Companies Building It | by Christina Wodtke | Medium](https://cwodtke.medium.com/i-love-generative-ai-and-hate-the-companies-building-it-3fb120e512ac)
Thanks for the link, Donald. I will check it out. My issues with Gen AI are much more issues with techno-capitalism.
Having just read ‘I Love Generative AI and Hate the Companies Building It’, I cannot see where the love is. All the cited examples are of human and environmental exploitation. How can you love a technology that is NEVER used ethically?
https://mastodon.social/@harold/116194066505805472
Thanks, Donald, for Wodtke’s reference; I will also give it a read.
I’m curious, if you’d care to share: among the many points Wodtke raises, which one(s) do you similarly align with, or even not align with?
Admittedly, I am a luddite in the “AI” landscape (e.g., traditional, generative, agentic) and use it sparingly. However, I expect to do so for client work in the environmental/energy sector, beyond existing traditional AI-driven models – looking at genAI for regulatory compliance and strategies. (Full disclosure and a bias I have – I included Altman as my “propaganda” example – https://pkm.jarche.ca/8-fake-news/comment-page-1/#comment-2294 – and I do not use OpenAI’s products.)
(Harold, I used my aggregator Vienna, adding Wodtke’s website, Eleganthack, https://eleganthack.com/. And while I don’t know if I’ll keep it among my feeds, I’m glad I remembered having Vienna – it’s a start!)
As a strategy and organizational consultant, I started working on AI a little before the ChatGPT boom, comparing different technologies, and so on. The use case was very interesting: helping social and healthcare workers avoid ableism.
Since then, things have really picked up speed: as a curious user, I started experimenting, and as a professional, it’s an unavoidable topic in any project, so I’ve had to learn a lot.
I’ve also learned because friends who work at companies developing AI told me they needed someone with my background, since 80% of the projects had nothing to do with technology, but rather with knowledge management processes, project/platform and data governance, etc. So I have a fairly realistic view of the state of the technology and the sector.
The most important thing to keep in mind, IMHO, is that big tech companies are dominating the conversation, selling a potential that doesn’t exist to justify investments that no one knows will ever pay off. The second most important thing is that even though AI can’t replace employees (the dream Silicon Valley sells to business owners and investors), it can do many things, and do them very well. It never ceases to amaze me, even if these capabilities are more modest than the absurd promises made by big tech. The third point is that, for various reasons, China has a very rich open-source ecosystem and a more pragmatic approach to AI than Silicon Valley.
My friends of @maximalismo@masto.es put it very well:
Ten years ago AlphaGo defeated Lee Sedol.
Most people thought the lesson was about intelligence.
It wasn’t.
The real lesson was that once a problem becomes computable, the whole economic structure around it changes.
That’s exactly what is happening now with software.
For decades software production required:
• large teams
• specialized engineers
• significant capital
AI is dissolving those constraints.
There are millions of pieces of software that should exist but don’t.
Internal tools. Dashboards. Small applications.
They never get built because development is too expensive.
AI is about to unlock that latent demand for software.
Here’s the paradox:
AI is one of the most capital-intensive technologies ever created.
Yet its first major social effect may be the opposite:
reducing the amount of capital required to create things.
Less scale.
More scope.
AI is ultimately trained on the accumulated knowledge of society.
It is a statistical form of socialized knowledge.
Which means its future depends less on algorithms…
and more on the kind of society we decide to build.
Toni, your final comment reminds me of the book, Makers https://en.wikipedia.org/wiki/Makers_(novel)
Of course! @maximalismo@masto.es are heavily influenced by science fiction. By the way, I am meeting Cory Doctorow in Barcelona next Friday; my friend Simona hosts his latest book presentation. So excited!
I am looking forward to attending a June event (here in California) at which Cory Doctorow will be the guest – https://www.keplers.org/upcoming-events-internal/cory-doctorow-2026 – hosted by colleague Angie. 🙂
Toni, your half-baked idea is thought-provoking. What you share in the closing, I definitely understand –
AI is ultimately trained on the accumulated knowledge of society.
It is a statistical form of socialized knowledge.
Which means its future depends less on algorithms…
and more on the kind of society we decide to build.
– and I appreciate your “… more on the kind of society we decide to build” – one hopes the risks of bias, marginalization, etc. are reduced or eliminated amidst power/capitalism (not getting on my soapbox – LOL).
My half-baked idea, which is from the previous Oct 2025 workshop:
“The absence of one’s PKM is the disregard of one’s lived experiences.”
Harold, I noted your reply, which I liked lots –
“In other words, the absence of conscious sensemaking methods is a disregard for our lived experiences.”
– – –
In Nov 2025, I posed “What is the absence of PKM?” among a community of folks involved in note-making, thinking, and PKM. My half-baked idea is that personal knowledge management primarily begins with being engaged in living among others. Secondarily, one can choose whether and how to record, organize, access, and share such knowledge as part of having a “system” of sorts, e.g., tools, technology, protocols, procedures.
Replies through December 2025 included folks not being able to explain what PKM means to them, the view that PKM is a “practice” around the tools and workflows for thinking, the idea that the key is being able to organize and visualize information for ready access, and the perspective of a “collective mismanagement of ignorance.”
I’ve re-posed that question to the same community, so I’m curious what emerges.
Meanwhile, the question continues to live rent free in my head. LOL.
It is often repeated, to poke fun at him, that Socrates distrusted writing (he says so, particularly in the Phaedrus) because it fixes words, in contrast to a living dialogue, and because it causes you to lose your memory, since you no longer need to exercise it as much (and we know that Socrates left nothing written). The reality is more nuanced: he says that writing is a pharmakon, which in ancient Greek meant both cure and poison (that is, a cure for forgetfulness and a poison for memory). I think this is implied in McLuhan, so it is true for any new medium.
My half-baked idea:
We now know how disruptive the invention of writing was in terms of knowledge production, and I think the Internet helped to share this knowledge production, so richer connections happened. And now AI makes not only knowledge but also reasoning skills more available (through a simulation of reasoning, because it is not actual reasoning, just as a “fixed” written conversation is not the same as a live conversation and memory).
Ahh, Toni – I see your proper half-baked idea here (which I missed), and it does make me think. Let’s hope “reasoning skills” will evolve to discern “knowledge” and that ignorance does not prevail.