It would be amazing if you could add an option to the miner to output the status bar data to a text file or stdout. That would make it possible to build a HiveOS wrapper, which would grep the output at 10-second intervals to get the data.
As a stretch goal, it would be SUPER awesome if you could run an HTTP server and serve out JSON. And if I'm not being too greedy, outputting what each core is doing hashrate-wise would make the implementation complete.
Thanks, and happy to provide any assistance that I can.
I can add an optional feature to the miner that exposes the current data (blocks accepted / blocks rejected), the thread count, and the current total hashrate.
Do you need the hashrate of each thread?
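For reference, a stats payload covering those fields might look something like this (the field names and shape are only illustrative, not a committed format):

```json
{
  "accepted": 12,
  "rejected": 1,
  "threads": 8,
  "hs_total": 1234.5
}
```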
That would be great. The hashrate of each thread is not necessary. In Hive OS, a process calls a wrapper-layer bash script that needs to return data in JSON format. Typically the wrapper layer will just grep the log file for this info, which can be done, but it sure would be nice if the wrapper could query a local socket and retrieve the data already formatted as JSON. However, a streaming log is fine as well. :-)
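To make the grep-based approach concrete, here is a minimal sketch of the parsing step such a wrapper could do. The `Total hashrate: <n> H/s` line format and the JSON field names are assumptions for illustration, not the miner's actual output:

```shell
# parse_hashrate: extract the total hashrate from one miner status line
# and print it as a small JSON object.
# NOTE: the "Total hashrate: <n> H/s" format is an assumed example.
parse_hashrate() {
  hs=$(printf '%s\n' "$1" | sed -n 's/.*Total hashrate: \([0-9.][0-9.]*\).*/\1/p')
  # Fall back to 0 if the line did not match
  printf '{"hs_total": %s, "units": "H/s"}\n' "${hs:-0}"
}

# A wrapper would typically feed it the most recent status line, e.g.:
#   parse_hashrate "$(grep 'Total hashrate' /path/to/miner.log | tail -n 1)"
parse_hashrate "Total hashrate: 1234.5 H/s"
```

A real wrapper would run this on a timer (e.g. every 10 seconds, as suggested above) and add whatever other fields Hive OS expects.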