Hello sandroalvesbrasil,
"I found the information that I can execute the RVC command (vsan.check_limits) and check if any of the percentages are next or above 80%."
vsan.disks_stats <pathToCluster> is the best summary view in my opinion - Cluster > Monitor > Disk Management in the GUI is a close second.
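If you want to try the RVC view, something like the following should work from the RVC shell on vCenter. The inventory path here is just an example - yours depends on your datacenter and cluster names:

```
# From the RVC shell on vCenter (inventory path is illustrative)
> vsan.disks_stats /localhost/MyDatacenter/computers/MyCluster
```

This prints per-disk capacity, % used, and component counts for every host in the cluster, which makes imbalances easy to spot at a glance.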
"I want to see these percentages to understand why this occurred, given that our environment has not undergone any changes."
Sure there were changes - you stated just two lines above this that you put a node into Maintenance Mode. Which option you used, how long it was in MM, and what else you did can all affect how data is distributed. For example, in a 4-node cluster with the default RAID1 FTM, if you place a node in MM with the 'Ensure Accessibility' option and the default clom repair delay timer, all the data on it will start rebuilding on the remaining 3 nodes after 60 minutes. If that node is not taken out of MM until hours later, all the data on it will be removed (as it is stale and has been rebuilt in its absence), so its disks will be at 0% used while the other nodes sit at a higher % used than before. Additionally, data on a cluster is rarely static and can grow over time, causing other knock-on changes.
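If you want to confirm what the repair delay timer is set to on your hosts, you can check the advanced setting from an ESXi shell - a quick sketch, assuming default names:

```
# On each ESXi host: show the CLOM repair delay (minutes before rebuild starts)
esxcli system settings advanced list -o /VSAN/ClomRepairDelay
```

The default is 60 minutes; if someone has changed it, that also changes how quickly data gets rebuilt while a node is in MM.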
"1 - Is there any other possibility besides what I found as (HW Failures / Removals, Host being put into maintenance mode or Incorrect policy change)?"
Have a look at which disks are imbalanced and by how much via the Disk Balance health check - the cause should be fairly obvious from which disks on which hosts are at what % used (e.g. everything got rebuilt while a host was in MM). But yes, other causes are possible, e.g. a disk being marked as failed or a controller acting up: data gets rebuilt elsewhere, then the disk/controller starts functioning normally again and data gets placed there (either from VM/vmdk creation, proactive rebalance, or reactive rebalance due to disks over 80% used).
"3 - If we have never done the rebalancing, can I lose data? Can my environment stop?"
No, that shouldn't cause issues: reactive rebalancing occurs without user intervention and starts moving data off disks at 80% (default value) or more used onto lower-%-used devices where possible. As with any system, though, space management is important and you should size your clusters adequately.
"4 - In the VMware documentation says that I can disable the alarm. I looked and the error we received was yellow, that is, it is not critical. Even without proactive rebalance enabled, rebalancing will occur when it reaches 80% (vsan.check_limits)?"
There is no red condition for Disk Balance AFAIK, as imbalance is not going to negatively impact anything other than potentially missing out on the performance gains of more devices being actively and equally used in the cluster. If your disk usage becomes imbalanced due to changes, then simply rebalance it.
"5 - Does proactive rebalance say it will start whenever% (vsan.check_limits) reaches 30% of disks?"
Proactive rebalance never starts automatically (as the name implies!) - it has to be initiated via the Health check in the GUI or via RVC.
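For the RVC route, the commands look roughly like this - again, the cluster path is an example for your own inventory:

```
# Start a proactive rebalance run on the cluster (path is illustrative)
> vsan.proactive_rebalance --start /localhost/MyDatacenter/computers/MyCluster

# Check whether a proactive rebalance is running and how imbalanced disks are
> vsan.proactive_rebalance_info /localhost/MyDatacenter/computers/MyCluster
```

The info command is also handy before starting anything, since it reports the current disk usage variance across the cluster.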
Bob