Product Idea: USB Flash Drives
#61
(04-01-2021, 05:31 PM)barray Wrote: I understand the point about GitHub, but it's also quite popular. GitLab is not nice to maintain for sure (as are most large web applications), but the one hosted at https://gitlab.com would be sufficient.

Quite frankly, what I'd like most is a simple Git or Subversion repository hosted on our own infrastructure, together with cgit or ViewVC, and accessed using everyone's favorite tools.  Optionally, there could be a read-only replica of the repository on GitHub.  By the time we get to writing actual code, I should be able to provide a hosted repository.

(04-01-2021, 05:31 PM)barray Wrote: I don't think it replaces a human, but it for sure ensures that at least the simplest of bugs can be caught early on. I would see several layers of testing anyway: regression testing (done on build), file transfer testing (a script that can be run) and manual testing (use the stick for a week with some code before declaring a release).

I agree.  Automated testing has its place and should be implemented, there's no doubt about it, but it cannot entirely replace humans when it comes to testing a hot-plug hardware device and its firmware.

(04-01-2021, 05:31 PM)barray Wrote: I did link it: https://wiki.pine64.org/index.php?title=...oldid=9632 but then it got yeeted from the front page: https://wiki.pine64.org/index.php?title=...oldid=9634

Apparently they want something more fleshed out before adding it, and there is no real consensus about exactly how it would be presented.

I'll see what I can do about editing my original comment to point to the wiki.

Yes, I saw your edit to the main wiki page and its later revert.  I agree that an "imaginary product" doesn't have its place on the main wiki page in its current layout, but the PineFlash wiki page needs to be linked somewhere.  However, at the moment I'm really not sure how to tweak the layout of the main wiki page to achieve that.

The link to the PineFlash wiki page you've added to the initial post in this thread looks good to me.

(04-01-2021, 05:31 PM)barray Wrote: I think we will cross that bridge when we come to it - I suspect manual testing will be good for quite a while. I imagine most of our bugs will come from more humble origins and will be more easily reproduced.

Absolutely, a lot of work and effort will need to go into putting together something that can actually be tested in any way.

(04-01-2021, 05:31 PM)barray Wrote: Yeah, we would need something to disconnect all the lines and that doesn't unintentionally add capacitance (which at high speed is a large ask). Let's just not explore this automated disconnect unless we absolutely have to... Again, I think manual disconnect will be sufficient for the time being.

Totally agreed.  Quite frankly, when it comes to automating the disconnect, it would actually be easier to build a robotic arm that physically plugs the device into and out of a USB port, rather than having a human do it. Smile

(04-01-2021, 05:31 PM)barray Wrote: Any memory we are using, we have control over. If we want, we could fit the device with a beefy cap and continue running despite being powered off - the hardware is entirely in our control. And also there are filesystems that are robust to random disconnects, like ZFS for example. We won't have to re-invent the wheel on this one.

Umh, I'm not sure about that.  Even simple Micro SD cards give you little control when it comes to knowing whether a write operation reported as successful has actually reached the underlying flash memory, which is exactly the guarantee that would make it possible to randomly cut the power or yank the card and lose no data.  Even rebooting the host can be problematic, because in many cases the reboot must include power cycling of the Micro SD card.  If the host uses the card at higher speeds and 1.8 V, the card must be power cycled to become initializable and usable by the boot loader after the reboot.

Letting "smart" flash do its job for a certain period of time (with an additional supercap or anything else) and hoping that data actually reaches the flash still doesn't provide the required level of data reliability.  It would be a hit-and-miss approach, which would only provide a false sense of reliability.  To have reliable storage, in our case we need underlying flash that is dumb, in the sense of being raw and having no integrated logic.  In the case of raw flash, having a write operation that is reported back as successful actually means reaching the flash with the written data, which is what we need.

Taking ZFS as an example, one of its requirements for providing data reliability is to have nothing between the filesystem and the actual drives.  In other words, no hardware RAID controllers (although, strictly speaking, there are no truly "hardware" RAID controllers) may be used, only raw drives.  Sure, drives also provide write caching on their own, which is a potential issue, but they can also reliably be told to flush their caches or to honor a write barrier.  It took years to flush out all kinds of bugs related to the management of write caches on HDDs.
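For what it's worth, the flush/barrier contract described above is exactly what `fsync` exposes to applications on the host side. A minimal sketch in Python (the file name is arbitrary; whether the drive truly honors the flush is the very trust issue being discussed):

```python
import os
import tempfile

def durable_write(path, data):
    """Write data and push it through the userspace and OS caches
    before returning -- a host-side 'write barrier' of sorts."""
    with open(path, "wb") as f:
        f.write(data)         # lands in Python's userspace buffer
        f.flush()             # hand it over to the OS page cache
        os.fsync(f.fileno())  # ask the OS (and the drive) to commit it

demo = os.path.join(tempfile.gettempdir(), "barrier-demo.bin")
durable_write(demo, b"acknowledged means durable")
```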
#62
(04-01-2021, 09:20 PM)dsimic Wrote: Quite frankly, what I'd like most is a simple Git or Subversion repository hosted on our own infrastructure, together with cgit or ViewVC, and accessed using everyone's favorite tools.  Optionally, there could be a read-only replica of the repository on GitHub.  By the time we get to writing actual code, I should be able to provide a hosted repository.

cgit looks cool, will have to check that out. I don't think it really matters all too much though and, bear in mind, if our project suddenly gets linked from, say, Reddit and the server experiences a massive load, somewhere like GitLab would be able to handle it whereas our servers would be crushed. I understand the desire to control the entire stack, but ultimately the decentralized nature of Git means we'll have our own copies of the code in the worst case anyway.

Also, nobody hangs out in my self-hosted IRC channel especially for this project Sad I guess only when we are official will Pine host a channel for us.

(04-01-2021, 09:20 PM)dsimic Wrote: Yes, I saw your edit to the main wiki page and its later revert.  I agree that an "imaginary product" doesn't have its place on the main wiki page in its current layout, but the PineFlash wiki page needs to be linked somewhere.  However, at the moment I'm really not sure how to tweak the layout of the main wiki page to achieve that.

Apparently nobody else really has any idea where to put it either. Their main concern, if I understand correctly, is that people will believe it is some official Pine product ready to go into production. Normally the discussions we're having happen behind closed doors at Pine when they are first designing the hardware anyway. This would be an entirely new process for them.

(04-01-2021, 09:20 PM)dsimic Wrote: Totally agreed.  Quite frankly, when it comes to automating the disconnect, it would actually be easier to build a robotic arm that physically plugs the device into and out of a USB port, rather than having a human do it. Smile

Yeah this is what I also thought... USB is ultra sensitive to data line capacitance...

(04-01-2021, 09:20 PM)dsimic Wrote: Umh, I'm not sure about that.  Even simple Micro SD cards give you little control when it comes to knowing whether a write operation reported as successful has actually reached the underlying flash memory, which is exactly the guarantee that would make it possible to randomly cut the power or yank the card and lose no data.  Even rebooting the host can be problematic, because in many cases the reboot must include power cycling of the Micro SD card.  If the host uses the card at higher speeds and 1.8 V, the card must be power cycled to become initializable and usable by the boot loader after the reboot.

I think there are some things we can explore here. It may be difficult, but I believe we should be able to make some guarantees about the data, even if that means immediately reading it back.

(04-01-2021, 09:20 PM)dsimic Wrote: Letting "smart" flash do its job for a certain period of time (with an additional supercap or anything else) and hoping that data actually reaches the flash still doesn't provide the required level of data reliability.  It would be a hit-and-miss approach, which would only provide a false sense of reliability.  To have reliable storage, in our case we need underlying flash that is dumb, in the sense of being raw and having no integrated logic.  In the case of raw flash, having a write operation that is reported back as successful actually means reaching the flash with the written data, which is what we need.

Not entirely true... Having it come back with 'success' just means that it "believes" it wrote something, but of course the memory cells are most vulnerable when their state is being changed, and you could get electron leakage that flips a bit.

(04-01-2021, 09:20 PM)dsimic Wrote: Taking ZFS as an example, one of its requirements for providing data reliability is to have nothing between the filesystem and the actual drives.  In other words, no hardware RAID controllers (although, strictly speaking, there are no truly "hardware" RAID controllers) may be used, only raw drives.  Sure, drives also provide write caching on their own, which is a potential issue, but they can also reliably be told to flush their caches or to honor a write barrier.  It took years to flush out all kinds of bugs related to the management of write caches on HDDs.

ZFS was just an example of how the software itself can be more robust. We'll have to think a lot more about exactly how to handle these edge cases, and I really suspect hardware limitations will mostly force our hand. As long as the result is predictable, I think that is already pretty great. I've seen a device that attempts to write data on power-down: the main micro began to crash before the flash did (the flash operated at lower voltages) and ended up writing semi-random data to a random address - not fun!

On the plus side, our job should be a little bit easier than with spinning disks (fortunately). Read/write time should be constant and predictable, so it's just a case of a suitable algorithm - and much more robust testing.

On a completely different note, I was thinking of some fun way to 'brand' the device. If we do stick with 'PineFlash', perhaps we could reference either Flash Gordon: https://www.imdb.com/title/tt0080745/ or The Flash: https://www.imdb.com/title/tt3107288/ ? I think both could be fun - the first is the savior of the Universe, the second is a super fast hero Tongue Either way, I'm feeling a red PCB with exposed copper could look epic! https://i.pinimg.com/originals/25/95/36/...8e2be8.jpg
#63
(04-01-2021, 10:02 PM)barray Wrote: cgit looks cool, will have to check that out. I don't think it really matters all too much though and, bear in mind, if our project suddenly gets linked from, say, Reddit and the server experiences a massive load, somewhere like GitLab would be able to handle it whereas our servers would be crushed. I understand the desire to control the entire stack, but ultimately the decentralized nature of Git means we'll have our own copies of the code in the worst case anyway.

Too high a load on the server would actually be a good problem to have.  I wouldn't worry too much about it for now, though.

(04-01-2021, 10:02 PM)barray Wrote: Apparently nobody else really has any idea where to put it either. Their main concern, if I understand correctly, is that people will believe it is some official Pine product ready to go into production. Normally the discussions we're having happen behind closed doors at Pine when they are first designing the hardware anyway. This would be an entirely new process for them.

That's an accurate description of the issue at hand.  Maybe we could put the link into the "Community and Support" section on the main wiki page, as an example of the community collaboration and effort?  That might even attract the attention of Pine64.

(04-01-2021, 10:02 PM)barray Wrote:
(04-01-2021, 09:20 PM)dsimic Wrote: Umh, I'm not sure about that.  Even simple Micro SD cards give you little control when it comes to knowing whether a write operation reported as successful has actually reached the underlying flash memory, which is exactly the guarantee that would make it possible to randomly cut the power or yank the card and lose no data.  Even rebooting the host can be problematic, because in many cases the reboot must include power cycling of the Micro SD card.  If the host uses the card at higher speeds and 1.8 V, the card must be power cycled to become initializable and usable by the boot loader after the reboot.

I think there are some things we can explore here. It may be difficult, but I believe we should be able to make some guarantees about the data, even if that means immediately reading it back.

Meh, not even a repeated read from "smart" flash would guarantee anything, because the embedded flash controller could simply return the buffered data that is still in transit to the underlying flash memory.

Some industrial full-size SD cards mitigate the issue by including supercaps that provide the emergency power required to actually complete all write operations that were acknowledged as successful, which is the same approach as in enterprise SSDs.  However, there's simply no space for such supercaps in Micro SD cards or SPI flash chips, which makes them inherently unreliable.
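To make that failure mode concrete, here's a toy model (class and behaviour invented purely for illustration) of "managed" flash that acknowledges writes out of its RAM buffer: even an immediate read-back check passes, yet a power cut still loses the data.

```python
class SmartFlash:
    """Toy model of 'managed' flash: writes are acknowledged once they
    hit the controller's RAM buffer, and drained to the NAND later."""
    def __init__(self):
        self.buffer = {}   # controller RAM (lost on power cut)
        self.flash = {}    # actual NAND cells

    def write(self, lba, data):
        self.buffer[lba] = data
        return "OK"        # acknowledged -- but not yet durable

    def read(self, lba):
        # A read-back check is fooled too: the controller serves the
        # buffered copy that never reached the flash.
        return self.buffer.get(lba, self.flash.get(lba))

    def drain(self):
        self.flash.update(self.buffer)  # background write-back
        self.buffer.clear()

    def power_cut(self):
        self.buffer.clear()             # RAM contents vanish

dev = SmartFlash()
assert dev.write(7, b"data") == "OK"
assert dev.read(7) == b"data"   # read-back "verifies" the write...
dev.power_cut()
assert dev.read(7) is None      # ...yet the data is gone
```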

(04-01-2021, 10:02 PM)barray Wrote:
(04-01-2021, 09:20 PM)dsimic Wrote: Letting "smart" flash do its job for a certain period of time (with an additional supercap or anything else) and hoping that data actually reaches the flash still doesn't provide the required level of data reliability.  It would be a hit-and-miss approach, which would only provide a false sense of reliability.  To have reliable storage, in our case we need underlying flash that is dumb, in the sense of being raw and having no integrated logic.  In the case of raw flash, having a write operation that is reported back as successful actually means reaching the flash with the written data, which is what we need.

Not entirely true... Having it come back with 'success' just means that it "believes" it wrote something, but of course the memory cells are most vulnerable when their state is being changed, and you could get electron leakage that flips a bit.

The leakage you mentioned actually belongs to a whole other category of issues, usually called bit rot or silent data corruption.  It is actually unrelated to the acknowledged writes that end up not reaching the underlying flash memory.  I'll try to explain this further.

There are three "players" in this game: the flash memory, the embedded flash controller, and the host.  When the flash "lies" to the controller, due to the effects of cosmic rays or a defective cell, that is silent data corruption.  In this case, the flash actually doesn't "lie"; it's simply unaware of the changes to the data.  When the controller "lies" to the host, as a result of performing wear leveling or simply as an attempt to improve performance, that is a write that hasn't actually reached the flash memory.  In this case, the controller really does "lie", being fully aware of the data not reaching the underlying flash.

Eliminating the controller (a true "liar") from the above-described equation leaves the host to deal with low-level flash issues, but removes the possibility of having incomplete writes treated as complete.  Of course, the possibility of silent data corruption still remains, as a constraint of the flash memory technology, but it can be mitigated using additional ECC data.
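As a concrete example of that ECC mitigation, a classic Hamming(7,4) code corrects any single flipped bit per codeword. This is a simplified illustration only; real NAND uses much stronger BCH/LDPC codes.

```python
def hamming74_encode(nibble):
    """Encode 4 data bits into a 7-bit Hamming codeword that can
    correct any single flipped bit."""
    d = [(nibble >> i) & 1 for i in range(4)]  # data bits d0..d3
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # codeword positions 1..7: p1 p2 d0 p3 d1 d2 d3
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(bits):
    """Return the corrected nibble, fixing one bit-flip if present."""
    b = bits[:]
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]   # parity over positions 1,3,5,7
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]   # parity over positions 2,3,6,7
    s3 = b[3] ^ b[4] ^ b[5] ^ b[6]   # parity over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based index of the bad bit
    if syndrome:
        b[syndrome - 1] ^= 1         # correct the flipped bit
    d = [b[2], b[4], b[5], b[6]]
    return d[0] | d[1] << 1 | d[2] << 2 | d[3] << 3

cw = hamming74_encode(0b1011)
cw[4] ^= 1                           # a cosmic ray flips one stored bit
assert hamming74_decode(cw) == 0b1011
```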

(04-01-2021, 10:02 PM)barray Wrote: On a completely different note, I was thinking of some fun way to 'brand' the device. If we do stick with 'PineFlash', perhaps we could reference either Flash Gordon: https://www.imdb.com/title/tt0080745/ or The Flash: https://www.imdb.com/title/tt3107288/ ? I think both could be fun - the first is the savior of the Universe, the second is a super fast hero Tongue Either way, I'm feeling a red PCB with exposed copper could look epic! https://i.pinimg.com/originals/25/95/36/...8e2be8.jpg

It would be fun, but receiving a copyright infringement notice would be much less fun. Smile
#64
I've added SMART to the Wiki's suggested features (with a probable pre-req of UASP).


In regards to the copy on write, what I meant was that the flash is not used as simple addressable blocks. Since we have wear leveling, and spares, we have to have an internal "directory" which, when I ask for block 357, gives me logical block 357, and not the physical block 357, which might be used for something else.

My thought is to make sure that this internal "directory" of logical blocks from the host to physical blocks in flash is kept consistent, even if we have to re-write the "directory" (with a version number so we know which is more current) into another place, so that the old "directory" stays good UNTIL the new "directory" is completely written.

Perhaps taking a feature from ZFS and having 2 copies of the "directory" at all times may help avoid corruption.

Note: All metadata on ZFS has at least 2 copies, even on a single disk.
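The versioned two-copy "directory" idea can be sketched like this (the slot layout, checksum scheme and all names are invented for illustration; a real implementation would use a proper CRC and flash-friendly layout):

```python
class MappingDirectory:
    """Toy logical-to-physical 'directory' kept in two slots; the slot
    with the highest valid version wins, so a crash mid-rewrite of one
    slot leaves the other copy intact."""
    def __init__(self):
        self.slots = [None, None]  # each: (version, mapping, checksum)

    @staticmethod
    def _checksum(version, mapping):
        return hash((version, tuple(sorted(mapping.items()))))

    def _older_slot(self):
        v = [s[0] if s else -1 for s in self.slots]
        return 0 if v[0] <= v[1] else 1

    def commit(self, version, mapping):
        # always overwrite the OLDER slot, never the current one
        self.slots[self._older_slot()] = (
            version, dict(mapping), self._checksum(version, mapping))

    def load(self):
        # pick the newest slot whose checksum verifies
        best = None
        for s in self.slots:
            if s and self._checksum(s[0], s[1]) == s[2]:
                if best is None or s[0] > best[0]:
                    best = s
        return best

d = MappingDirectory()
d.commit(1, {357: 12})       # logical block 357 -> physical page 12
d.commit(2, {357: 48})       # remapped after wear levelling
d.slots[d._older_slot()] = (3, {357: 99}, "torn")  # power cut mid-rewrite
ver, mapping, _ = d.load()
assert (ver, mapping[357]) == (2, 48)  # the intact copy still wins
```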


Last, many software & hardware development projects include test jigs. We may want to create a USB test jig that will remove power. Perhaps by relay to avoid the capacitance issue.  Or perhaps use an existing USB controlled relay with a special USB to USB adapter that cuts power.

Here are a few USB controlled relays I found cheaply:
https://www.amazon.com/NOYITO-1-Channel-...B07C3LPH3X
https://www.amazon.com/SMAKN-LCUS-1-modu...B01CN7E0RQ

That first one has additional models that have 2 or 4 which could in theory cut the other lines too. (Though, I don't know about capacitance issues...)
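As a side note, those cheap serial-relay boards are typically driven by a tiny 4-byte frame over a USB-UART bridge. A sketch of building the frame (the protocol bytes are an assumption taken from commonly circulated vendor notes, so verify against the actual module):

```python
def relay_frame(channel, on):
    """Build the 4-byte command frame used by many cheap USB serial
    relay boards: start byte, channel, state, additive checksum.
    The exact protocol is an assumption -- check the vendor's notes."""
    frame = bytes([0xA0, channel, 1 if on else 0])
    return frame + bytes([sum(frame) & 0xFF])

# cut VBUS to the device under test, then restore it
assert relay_frame(1, True) == b"\xa0\x01\x01\xa2"
assert relay_frame(1, False) == b"\xa0\x01\x00\xa1"
```

With pyserial, sending would look like `serial.Serial("/dev/ttyUSB0", 9600).write(relay_frame(1, True))` (the port name is hypothetical).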
--
Arwen Evenstar
Princess of Rivendale
#65
(04-01-2021, 11:01 PM)dsimic Wrote: Too high a load on the server would actually be a good problem to have.  I wouldn't worry too much about it for now, though.

I did manage to get cgit up and running; it was a little bit of a pain. Seems like it's not quite yet mature for small servers hosting large repositories, but should be good enough for our purpose.

(04-01-2021, 11:01 PM)dsimic Wrote: That's an accurate description of the issue at hand.  Maybe we could put the link into the "Community and Support" section on the main wiki page, as an example of the community collaboration and effort?  That might even attract the attention of Pine64.

I was essentially told that anything put on the main page would be removed. I think we would need to ask before adding something, otherwise some wiki admin will just revert it again.

(04-01-2021, 11:01 PM)dsimic Wrote: Meh, not even a repeated read from "smart" flash would guarantee anything, because the embedded flash controller could simply return the buffered data that is still in transit to the underlying flash memory.

Some industrial full-size SD cards mitigate the issue by including supercaps that provide the emergency power required to actually complete all write operations that were acknowledged as successful, which is the same approach as in enterprise SSDs.  However, there's simply no space for such supercaps in Micro SD cards or SPI flash chips, which makes them inherently unreliable.

I believe that between the filesystem and the flash controller, there should be something that can be done in this space. But I guess this can also be part of the experimentation anyway.

(04-01-2021, 11:01 PM)dsimic Wrote: The leakage you mentioned actually belongs to a whole other category of issues, usually called bit rot or silent data corruption.  It is actually unrelated to the acknowledged writes that end up not reaching the underlying flash memory.  I'll try to explain this further.

There are three "players" in this game: the flash memory, the embedded flash controller, and the host.  When the flash "lies" to the controller, due to the effects of cosmic rays or a defective cell, that is silent data corruption.  In this case, the flash actually doesn't "lie"; it's simply unaware of the changes to the data.  When the controller "lies" to the host, as a result of performing wear leveling or simply as an attempt to improve performance, that is a write that hasn't actually reached the flash memory.  In this case, the controller really does "lie", being fully aware of the data not reaching the underlying flash.

Eliminating the controller (a true "liar") from the above-described equation leaves the host to deal with low-level flash issues, but removes the possibility of having incomplete writes treated as complete.  Of course, the possibility of silent data corruption still remains, as a constraint of the flash memory technology, but it can be mitigated using additional ECC data.

I'll have to look at this further. By the way, the chips linked on the wiki page are automotive grade, so they have ECC in them too.

(04-01-2021, 11:01 PM)dsimic Wrote: It would be fun, but receiving a copyright infringement notice would be much less fun. Smile

Inspired of course - not a direct clone. Also the PCB would be entirely hidden in the final casing for the device.

(04-03-2021, 03:00 PM)Arwen Wrote: I've added SMART to the Wiki's suggested features (with a probable pre-req of UASP).

I saw, looks good Smile

(04-03-2021, 03:00 PM)Arwen Wrote: In regards to the copy on write, what I meant was that the flash is not used as simple addressable blocks. Since we have wear leveling, and spares, we have to have an internal "directory" which, when I ask for block 357, gives me logical block 357, and not the physical block 357, which might be used for something else.

My thought is to make sure that this internal "directory" of logical blocks from the host to physical blocks in flash is kept consistent, even if we have to re-write the "directory" (with a version number so we know which is more current) into another place, so that the old "directory" stays good UNTIL the new "directory" is completely written.

Perhaps taking a feature from ZFS and having 2 copies of the "directory" at all times may help avoid corruption.

Note: All metadata on ZFS has at least 2 copies, even on a single disk.

I think I get what you mean - but this is wear leveling, not a filesystem?

(04-03-2021, 03:00 PM)Arwen Wrote: Last, many software & hardware development projects include test jigs. We may want to create a USB test jig that will remove power. Perhaps by relay to avoid the capacitance issue. Or perhaps use an existing USB controlled relay with a special USB to USB adapter that cuts power.

Here are a few USB controlled relays I found cheaply:
https://www.amazon.com/NOYITO-1-Channel-...B07C3LPH3X
https://www.amazon.com/SMAKN-LCUS-1-modu...B01CN7E0RQ

That first one has additional models that have 2 or 4 which could in theory cut the other lines too. (Though, I don't know about capacitance issues...)

Those seem to be USB controlled relays, not relays to control USB? The problem with having one of those on each of the lines is the capacitance and resistance they add. Honestly the complexity overhead is not worth it - designing this test rig could end up as large a task as the USB device itself.
#66
(04-03-2021, 03:00 PM)Arwen Wrote: I've added SMART to the Wiki's suggested features (with a probable pre-req of UASP).

SMART would be one of the must-have features, simply because we don't want to make another black box.  The internal statuses of the device should be made available to the host (and to the end-users) as much as possible, using already existing mechanisms such as SMART.

(04-03-2021, 03:00 PM)Arwen Wrote: In regards to the copy on write, what I meant was that the flash is not used as simple addressable blocks. Since we have wear leveling, and spares, we have to have an internal "directory" which, when I ask for block 357, gives me logical block 357, and not the physical block 357, which might be used for something else.

My thought is to make sure that this internal "directory" of logical blocks from the host to physical blocks in flash is kept consistent, even if we have to re-write the "directory" (with a version number so we know which is more current) into another place, so that the old "directory" stays good UNTIL the new "directory" is completely written.

This is just another confirmation of the need to use raw flash, instead of using "managed" flash.  Only raw flash can be told exactly what to do and when, instead of having the embedded flash microcontroller doing things nondeterministically on its own.  Using raw flash would make it possible to use the inherent CoW nature of flash to provide higher-level CoW, for the stored data.

In other words, combining the CoW nature of flash and our own wear levelling algorithms would make it possible to ensure guaranteed data reliability, and even to provide the ability for the host to define write barriers, which would trickle the data reliability up to the OS and filesystem level.  That would be awesome! Cool
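A minimal sketch of that combination, CoW page allocation plus naive wear levelling over raw flash (all names and the page count are invented; a real flash translation layer is far more involved):

```python
class RawFlash:
    """Toy CoW wear leveller over raw flash: every logical write goes
    to a fresh physical page picked by erase count, and the logical-to-
    physical map only flips after the new page is fully written."""
    def __init__(self, pages=8):
        self.pages = [None] * pages   # physical pages
        self.erases = [0] * pages     # per-page erase counters
        self.map = {}                 # logical block -> physical page

    def _alloc(self):
        # least-worn free page first (wear levelling)
        free = [p for p in range(len(self.pages)) if self.pages[p] is None]
        return min(free, key=lambda p: self.erases[p])

    def write(self, lba, data):
        page = self._alloc()
        self.pages[page] = data       # 1. program the new page (CoW)
        old = self.map.get(lba)
        self.map[lba] = page          # 2. flip the mapping (the "barrier")
        if old is not None:           # 3. only then reclaim the stale page
            self.pages[old] = None
            self.erases[old] += 1

    def read(self, lba):
        return self.pages[self.map[lba]]

f = RawFlash()
f.write(0, b"v1")
f.write(0, b"v2")        # old page stays valid until the map flips
assert f.read(0) == b"v2"
assert sum(f.erases) == 1
```

A power cut between steps 1 and 2 simply leaves the old page mapped, which is exactly the crash behaviour a host filesystem can reason about.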

(04-03-2021, 03:00 PM)Arwen Wrote: Last, many software & hardware development projects include test jigs. We may want to create a USB test jig that will remove power. Perhaps by relay to avoid the capacitance issue.  Or perhaps use an existing USB controlled relay with a special USB to USB adapter that cuts power.

As already described, cutting power to a USB device is doable on recent PC motherboards, so no additional hardware or a dedicated testbed would be required.  Though, that would cover just cutting the power to a USB device, instead of simulating the unplugging of a device as a whole.  However, having the ability to cut power on demand should be good enough for a while.

(04-05-2021, 07:47 PM)barray Wrote: I was essentially told that anything put on the main page would be removed. I think we would need to ask before adding something, otherwise some wiki admin will just revert it again.

I've spent some more time tidying up the main wiki page, and this edit introduced a link to the PineFlash page.

(04-05-2021, 07:47 PM)barray Wrote: I believe that between the filesystem and the flash controller, there should be something that can be done in this space. But I guess this can also be part of the experimentation anyway.

Unfortunately, there's no "magic" there to rely upon, only the simple but fundamental rules to follow.  As an example, you can find quite a few descriptions of the real-world issues with using microSD cards in embedded systems, caused by their nondeterministic nature.

(04-05-2021, 07:47 PM)barray Wrote:
(04-01-2021, 11:01 PM)dsimic Wrote: Eliminating the controller (a true "liar") from the above-described equation leaves the host to deal with low-level flash issues, but removes the possibility of having incomplete writes treated as complete.  Of course, the possibility of silent data corruption still remains, as a constraint of the flash memory technology, but it can be mitigated using additional ECC data.

I'll have to look at this further. By the way, the chips linked on the wiki page are automotive grade, so they have ECC in them too.

Using flash chips that provide the ECC functionality internally would be good, as long as it's still raw flash.

(04-05-2021, 07:47 PM)barray Wrote: Inspired of course - not a direct clone. Also the PCB would be entirely hidden in the final casing for the device.

I see no possible copyright issues with the color of the PCB, and the association with Flash Gordon might be mentioned somewhere unofficially, as a funny note. Smile  However, please keep in mind that PCB manufacturers charge much more for red PCBs than the green ones.
#67
(04-07-2021, 12:52 PM)dsimic Wrote: This is just another confirmation of the need to use raw flash, instead of using "managed" flash.  Only raw flash can be told exactly what to do and when, instead of having the embedded flash microcontroller doing things nondeterministically on its own.  Using raw flash would make it possible to use the inherent CoW nature of flash to provide higher-level CoW, for the stored data.

In other words, combining the CoW nature of flash and our own wear levelling algorithms would make it possible to ensure guaranteed data reliability, and even to provide the ability for the host to define write barriers, which would trickle the data reliability up to the OS and filesystem level.  That would be awesome! Cool

Eh, one problem at a time - for now I'll settle for writing data unreliably. We need to start the process of creating a prototype, otherwise this project will never get off the ground. The way I see it, the milestones are:

1. Block diagram - How will this be roughly put together? (I think we are mostly there in terms of written ideas, but I think a diagram will say more than words)

2. BOM - Exactly what chips will we use and how much will this cost? Are the specs up to scratch to give us a viable path forwards?

3. Schematic - What gets connected and where. (I'm currently experimenting with another project to do this programmatically.)

4. Layout - Design the PCB to be made.

5. Prototype manufacture - Let's get the prototype built!

(04-07-2021, 12:52 PM)dsimic Wrote: As already described, cutting power to a USB device is doable on recent PC motherboards, so no additional hardware or a dedicated testbed would be required.  Though, that would cover just cutting the power to a USB device, instead of simulating the unplugging of a device as a whole.  However, having the ability to cut power on demand should be good enough for a while.

It would 100% be better if we could do this using some Pine hardware, like the A64 LTS board.

(04-07-2021, 12:52 PM)dsimic Wrote: I've spent some more time tidying up the main wiki page, and this edit introduced a link to the PineFlash page.

Good stuff Smile

(04-07-2021, 12:52 PM)dsimic Wrote: Unfortunately, there's no "magic" there to rely upon, only the simple but fundamental rules to follow.  As an example, you can find quite a few descriptions of the real-world issues with using microSD cards in embedded systems, caused by their nondeterministic nature.

Using flash chips that provide the ECC functionality internally would be good, as long as it's still raw flash.

I don't know if you checked the memory I linked to in the wiki, but it's really not as raw as you're thinking. I am up for using raw flash, but would want to understand exactly what the memory controller is actually doing. For example, given the number of pins on raw flash, I am not even entirely sure it can be controlled with just QSPI. If so, it means we'll need a much more complex controller, something we will get zero support for from the community.

If on the other hand we stuck with the automotive NAND flash, we get ECC, QSPI, etc. - at a cost, of course. It's also tonnes easier to interface with.

The other option is that we potentially link tonnes of eMMC memory together, although again I would need to understand exactly how that works - it feels like we lose control again.
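For reference, "raw" SPI NAND parts are driven with a small command set over plain (Q)SPI pins, so pin count need not be the blocker. A sketch of the page-read sequence (the opcodes follow common parts such as the W25N01GV, but treat them as assumptions and check the actual datasheet):

```python
def page_read_sequence(row, col):
    """Command frames to read one page from a typical SPI NAND chip:
    load the page into the chip's internal cache, poll the status
    register, then clock the data out of the cache."""
    return [
        bytes([0x13, (row >> 16) & 0xFF, (row >> 8) & 0xFF, row & 0xFF]),  # PAGE DATA READ into cache
        bytes([0x0F, 0xC0]),                                               # GET FEATURE: busy flag, ECC status bits
        bytes([0x03, (col >> 8) & 0xFF, col & 0xFF, 0x00]),                # READ FROM CACHE (+ 1 dummy byte)
    ]

frames = page_read_sequence(0x000102, 0)
assert frames[0] == b"\x13\x00\x01\x02"
```

Writing is the same shape in reverse (WRITE ENABLE, PROGRAM LOAD into cache, PROGRAM EXECUTE, then poll the status register), so a QSPI peripheral plus a little firmware is plausibly all the "controller" needed.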

(04-07-2021, 12:52 PM)dsimic Wrote: I see no possible copyright issues with the color of the PCB, and the association with Flash Gordon might be mentioned somewhere unofficially, as a funny note. Smile  However, please keep in mind that PCB manufacturers charge much more for red PCBs than the green ones.

I think if you're producing PCBs en masse it doesn't matter too much. Back when I was involved in the automotive side of things, we would purposely make the prototype boards a different colour, to visually tell them apart from one another. When you are doing low-run production, the colour of the PCB is the least of your worries. I know these days many people like to get their PCBs made in black (probably to hide the burn marks from soldering).
#68
(04-11-2021, 10:52 AM)barray Wrote: Eh, one problem at a time - for now I'll settle for writing data unreliably.

Absolutely.  It's just that we need to keep the big picture in mind, and make the right decisions and choices at the important points in the development.

(04-11-2021, 10:52 AM)barray Wrote: We need to start the process of creating a prototype, otherwise this project will never get off the ground. The way I see it, the milestones are:

1. Block diagram - How will this be roughly put together? (I think we are mostly there in terms of written ideas, but I think a diagram will say more than words)
2. BOM - Exactly what chips will we use and how much will this cost? Are the specs up to scratch to give us a viable path forwards?
3. Schematic - What gets connected and where. (I'm currently experimenting with another project to do this programmatically.)
4. Layout - Design the PCB to be made.
5. Prototype manufacture - Let's get the prototype built!

You've laid it out very well, but the second point is the blocker.  We have to select the main chip first, but I'm really not sure which way to go.  Our direction so far has been to use the BL602, which seems rather fine, but it's still a can of worms that would surely come with more than a few issues, unknowns and unforeseen constraints.  We should also keep in mind that a BL602-based storage device would be rather slow, and at this point in time I'm pretty sure that we'll, unfortunately, end up getting no support from Pine64. :(

Also, please have a look at the STM32H7 microcontrollers.  The specs look very good: there's support for an SDR/DDR quad-SPI interface (up to 256 MB), two 1-/4-/8-bit SD/MMC SDR104/HS200 interfaces (hmm, RAID1 across two microSD cards or, even better, two eMMC modules?), a high-speed (480 Mbit/s) USB 2.0 device interface, and even an Ethernet MAC (100 Mbit/s only, though).  But where would we start from the software side?  At least a complete and very detailed reference manual exists, and the GNU toolchain for Cortex-M7 is readily available.
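To make the RAID1-across-two-eMMC-modules idea a bit more concrete, here's a toy host-side model of mirrored block storage with per-block checksums, so a read can detect a corrupt copy on one mirror and fall back to the other.  The two "devices" are just byte arrays; this is purely an illustration of the scheme, not firmware for any particular controller, and the block size and CRC choice are assumptions.

```python
import zlib

BLOCK = 512  # assumed logical block size

class Raid1Mirror:
    """Toy RAID1 model: every block is written to both devices,
    together with a CRC32, so a read can detect corruption on one
    mirror and transparently fall back to the other."""

    def __init__(self, size_blocks):
        self.record = BLOCK + 4  # payload + CRC32 trailer
        self.devs = [bytearray(size_blocks * self.record) for _ in range(2)]

    def write(self, lba, data):
        assert len(data) == BLOCK
        rec = data + zlib.crc32(data).to_bytes(4, "little")
        off = lba * self.record
        for dev in self.devs:            # mirror the write to both devices
            dev[off:off + self.record] = rec

    def read(self, lba):
        off = lba * self.record
        for dev in self.devs:            # try the primary, then the mirror
            rec = bytes(dev[off:off + self.record])
            data = rec[:BLOCK]
            crc = int.from_bytes(rec[BLOCK:], "little")
            if zlib.crc32(data) == crc:
                return data
        raise IOError("both mirrors corrupt at LBA %d" % lba)
```

In a real device the CRC would likely be replaced by the eMMC's own error reporting, but the read-fallback logic would look much the same.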

(04-11-2021, 10:52 AM)barray Wrote:
(04-07-2021, 12:52 PM)dsimic Wrote: As already described, cutting power to a USB device is doable on recent PC motherboards, so no additional hardware or a dedicated testbed would be required.  Though, that would cover just cutting the power to a USB device, instead of simulating the unplugging of a device as a whole.  However, having the ability to cut power on demand should be good enough for a while.

It would 100% be better if we could do this using some Pine hardware, like the A64 LTS board.

Not a problem. 8-)  According to the Rock64 v3 schematic, it is possible to control (i.e. cut) the power supply to its USB ports, using one of the GPIO pins of the SoC.  I'd also suggest that we use Rock64, because it's a more powerful SBC than Pine A64-LTS, and has a USB 3.0 port as well.
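A power-cut test harness on the Rock64 could be as simple as pulsing that GPIO through the Linux sysfs GPIO interface.  A minimal sketch, with the caveat that the GPIO number below is a placeholder (the real pin has to be read off the Rock64 v3 schematic) and that the `write` callback is injectable purely so the sequence can be exercised without hardware:

```python
import time

GPIO = 27  # hypothetical GPIO number controlling USB VBUS on the Rock64

def cut_usb_power(off_seconds, write=None):
    """Export a sysfs GPIO and pulse it low to cut USB VBUS,
    simulating an abrupt unplug while a transfer is in flight."""
    if write is None:
        def write(path, value):
            with open(path, "w") as f:
                f.write(value)
    base = "/sys/class/gpio"
    write(base + "/export", str(GPIO))
    write("%s/gpio%d/direction" % (base, GPIO), "out")
    write("%s/gpio%d/value" % (base, GPIO), "0")   # VBUS off
    time.sleep(off_seconds)
    write("%s/gpio%d/value" % (base, GPIO), "1")   # VBUS back on
```

A test script would start a large file copy to the stick, call this at a random moment, then fsck the filesystem and compare checksums after the device re-enumerates.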

(04-11-2021, 10:52 AM)barray Wrote: I don't know if you checked the memory I linked to in the wiki, but it's really not as raw as you're thinking. I am up for using raw flash but would want to understand exactly what the memory controller is actually doing. For example, given the number of pins on raw flash, I am not even entirely sure it can be controlled with just QSPI. If that's so, it means we'll need a far more complex controller, something we will get zero support for in the community.

Quite frankly, somehow I missed that link, but now I had a look at the W25N datasheet.  As far as I can see, the W25N has on-chip ECC and management of bad blocks, but doesn't do wear leveling on its own.  Based on that, all acknowledged write operations should actually reach the flash.  Thus, I'd say that it strikes a good balance.
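Since the W25N does its ECC on-chip, the firmware's job after each page read is just to inspect the ECC status bits in status register 3.  Here's a small decoder for those two bits; the bit positions (ECC-0 = bit 4, ECC-1 = bit 5) are my reading of the public W25N01GV datasheet and should be double-checked against it before being relied upon:

```python
def decode_w25n_ecc(sr3):
    """Interpret the ECC-1/ECC-0 bits of the W25N status register 3
    (read with the 0x0F instruction at register address 0xC0).
    Bit positions assumed from the W25N01GV datasheet -- verify!"""
    ecc = (sr3 >> 4) & 0b11  # ECC-1 (bit 5), ECC-0 (bit 4)
    return {
        0b00: "no ECC errors",
        0b01: "errors detected and corrected",
        0b10: "uncorrectable errors in this page",
        0b11: "uncorrectable errors in multiple pages",
    }[ecc]
```

The firmware would treat the "corrected" case as a hint to migrate the block before it degrades further, and the uncorrectable cases as a hard read failure to report over USB.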

(04-11-2021, 10:52 AM)barray Wrote: The other option is potentially we link tonnes of eMMC memory together, although again I would need to understand exactly how that works - it feels like we lose control again.

I haven't researched eMMC storage in detail yet, but I'd say that it might be treated as more reliable than microSD cards, for example.  Some information is available in this PDF file, but in general not much freely available information is floating around.  This PDF file provides some more information, including a description of the power-off procedure, cache flushing, and cache barriers.  These three features are critical for any storage device that aims to store data reliably.
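To illustrate why cache flushing matters from the host's point of view: any update that must survive a power cut has to be explicitly forced out of the device's volatile cache, which on Linux is where the eMMC flush command ultimately gets issued.  A generic POSIX sketch of the classic crash-safe update pattern (the file layout and names here are just for illustration):

```python
import os

def atomic_write(path, data):
    """Crash-safe file update: write a temp file, flush it to stable
    storage, atomically rename it over the target, then flush the
    directory entry.  The fsync calls are the points at which the
    storage device's cache flush is actually exercised."""
    tmp = path + ".tmp"
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)              # force the data out of the device cache
    finally:
        os.close(fd)
    os.rename(tmp, path)          # atomic replacement on POSIX filesystems
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)             # persist the directory entry as well
    finally:
        os.close(dfd)
```

A storage device that acknowledges a flush before the data is truly on stable media breaks this pattern silently, which is exactly the failure mode the power-off notification and cache-barrier features are meant to rule out.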

(04-11-2021, 10:52 AM)barray Wrote: I think if you're producing PCBs en masse it doesn't matter too much. Back when I used to be involved in the automotive side of things we would purposely make the prototype boards a different colour anyway, to visually see they were different from one another. When you are doing low-run production, the colour of the PCB is the least of your worries.

It's quite possible that the price difference becomes negligible in larger quantities.  My remark was based on a limited insight into the prices quoted by JLCPCB for different PCB colors.
#69
Ran across the following USB storage device. It apparently has many of the features we were looking for: manageability, reliability, SMART, etc.

Here is a link; select USB and search:
https://www.virtium.com/products/industr...-selector/

They seem to cost a small fortune, even in quantity.

So, if "ours" is truly manageable and reliable, and has SMART (potentially adding RAID & encryption), in theory it would be worth more than the bottom-of-the-barrel competition.
--
Arwen Evenstar
Princess of Rivendale
#70
Nice find.  They seem to charge about 40 EUR for a single 32 GB USB flash drive?  We might be able to match that price, but it would depend on the size of the initial batch.