12-28-2021, 01:00 PM
Is there a way to compile user-submitted PinePhone battery life benchmarks by configuration, so we can zero in on configurations that conserve power effectively and also meet our individual use requirements?
Spitballing...
We have a large user base of mostly homogeneous hardware and a finite number of distributions.
The benchmark number is hours:minutes, measured from the transition from >90% to <=90% charge until the transition from >10% to <=10%.
Or whatever existing battery benchmark is chosen to standardize on. Doesn't matter as much as being consistently applied across all test configurations.
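To make the timing rule above concrete, here's a rough sketch in Python of how the duration could be computed. The readings would come from polling the battery's capacity file under sysfs on a real PinePhone (the exact path varies by kernel and is an assumption here); this just shows the threshold-crossing logic on a list of (seconds, percent) samples.

```python
def benchmark_duration(readings):
    """readings: chronological list of (seconds_since_start, capacity_percent)
    samples. The clock starts at the first transition from >90% to <=90%
    and stops at the first transition from >10% to <=10%.
    Returns an 'H:MM' string, or None if either threshold was never crossed."""
    start = end = None
    prev = None
    for t, cap in readings:
        if prev is not None:
            if start is None and prev > 90 and cap <= 90:
                start = t  # entered the measured window
            if end is None and prev > 10 and cap <= 10:
                end = t    # left the measured window
        prev = cap
    if start is None or end is None:
        return None  # run never completed; result is invalid
    minutes = int(end - start) // 60
    return f"{minutes // 60}:{minutes % 60:02d}"
```

For example, a phone sampled at 95% at t=0, 90% at ten minutes in, and 10% at the ten-hour mark would score 9:50. Using the 90%/10% window rather than 100%/0% sidesteps the nonlinear charge reporting near full and empty.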
The variable (what the ranked list of CPU models would be in a processor benchmark) is the configuration: hardware version, distribution version, interface version, settings, running programs. It would be a very long list of configurations, most of them backed by a single test, though some people may have identical configurations. The chart would display the average result per configuration, and clicking a configuration category would expand it to show the individual tests for that config.
We would need a script that records the current config, runs the benchmark over the better part of a day, and watches for invalidating events such as user input, shutdowns, or a change of config mid-run. It would stay active throughout the benchmark period to catch any of these.
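The config-recording piece might look something like the sketch below, assuming a Linux phone. What counts as "the configuration" here (kernel, distro release, running programs) is my guess at a minimal useful set, not a fixed spec; fingerprinting the snapshot makes mid-run changes cheap to detect — take a fingerprint at the start, re-check periodically, and invalidate the run if it ever differs.

```python
import hashlib
import os
import platform

def config_snapshot():
    """Capture a comparable summary of the current configuration."""
    snap = {
        "kernel": platform.release(),
        "machine": platform.machine(),
    }
    # Distro identification, if /etc/os-release exists (standard on most distros)
    try:
        with open("/etc/os-release") as f:
            snap["os_release"] = f.read()
    except OSError:
        snap["os_release"] = "unknown"
    # Running programs: command names of current processes via /proc
    cmds = set()
    try:
        pids = [p for p in os.listdir("/proc") if p.isdigit()]
    except OSError:
        pids = []  # /proc not available
    for pid in pids:
        try:
            with open(f"/proc/{pid}/comm") as f:
                cmds.add(f.read().strip())
        except OSError:
            pass  # process exited between listdir and open
    snap["programs"] = sorted(cmds)
    return snap

def config_fingerprint(snap):
    """Hash the snapshot so two configs can be compared with one string."""
    return hashlib.sha256(repr(sorted(snap.items())).encode()).hexdigest()
```

Detecting user input and shutdowns would need more platform-specific work (evdev for touch events, a sentinel file checked at boot for unclean stops), so that part is left out of the sketch.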
I have no experience with this; maybe it is already a thing. I'd just like a broad picture of how config choices affect battery life, and I think users would be willing to set aside productive use of their PinePhones for a single day to contribute their piece of data to the big picture.