"Will" is indeed an intriguing research theme in Neo-Cybernetics.
While this is a different perspective from the question of what "will" is, I believe there is also a discussion to be had about how will ought to be. Personally, I think both individuals and societies should be more conscious of their will, making even short-term decisions with a wider range of societal variables and a longer-term outlook in mind. I have occasionally written about this idea, referring to it as "Willism," in my articles.
However, I have started to think that overemphasizing will may not always be beneficial. This shift came after I encountered a Japanese blog in which the author argued that not believing in free will could lessen one's mental burden. Taking full responsibility for my own decisions had always been a given for me, but I have come to realize that for some people, this stance might not lead to well-being.
Therefore, while I believe it is essential for individuals to clarify their will, it may be necessary to limit this to a range suited to each person's abilities and personality. My fundamental vision of the future is that, as AI takes over intellectual labor, we will enter an era of will. In such an era, however, we may also need efforts to strengthen people's capacity to bear the responsibility of will, along with mechanisms by which communities or societies bridge individual differences in that capacity (much as social security does in economics).