Ah. So…blame the victim. Cause apparently capitalism is, like, perfect or something.
The company selling the software arbitrarily created a problem for no reason other than greed. And yet, the ones not forking over more money are the problem.
Yeah, hard no from me on your entire argument, buddy.
Obviously the company is the bad guy here. But if the research data is so important, the lab should try to solve their problem instead of just praying that the 20 year old machine won’t fail.
I didn’t say capitalism is perfect nor did I imply it.
So hypothetically let’s say the vendor lost the rights to the software since it is abandonware – great. I’d love it.
What changes for justmeremember’s situation? Nothing changes.
I suppose your only issue here is that the software vendor or some entity should support it forever. OK, so why didn’t they just choose a FOSS alternative or make one themselves? If not then, why not now? Nothing stops them from the latter other than time and effort. Even better, everyone else could benefit!
Does that make justmeremember just as culpable here, or are they still the victim with no reasonable way to a solution?
I posted simply because this specific issue is far too common, and just as common is the failure to actually solve it, abandonware argument or not, instead of stop-gapping and kicking it down the line until access to the data is gone forever.
Because they’re a science research lab, not a computer programming lab? Maybe I’m misunderstanding what you’re saying, but they’re not the right people, nor in the right situation, to be solving this problem.
It isn’t necessarily a computer programming problem either. It is at least in part an IT problem, one the poster states is the primary job of his ‘lab guy’: maintaining two ancient Windows 95 computers. That person must know enough to keep troubleshooting and replacing the hardware, and certainly enough to transfer data off the old spinning hard drives. Why not put that technical expertise into actually solving the problem long-term? Why not run both machines in QEMU and use hardware passthrough if required? At the very least you would rid yourself of the ticking time bomb of aging hardware and its diminishing availability; that RAM that is no longer made isn’t going to last forever. They don’t even need to know much about how it all works; there are guides available, even for Windows 95.
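To make that concrete, here’s a minimal sketch of the kind of invocation I have in mind, assuming the old drive has already been imaged to a file; the image name, memory size and serial device path are placeholders of mine, not details from the post:

```python
import subprocess

# Minimal QEMU invocation for a Windows 95 guest on a modern Linux host.
# Every path and device name below is an assumption for illustration.
qemu_cmd = [
    "qemu-system-i386",         # 32-bit PC emulation suits a Win95-era guest
    "-machine", "pc",           # classic i440FX/PIIX chipset
    "-cpu", "pentium",
    "-m", "128",                # Win95 prefers modest amounts of RAM
    "-hda", "win95-lab.img",    # raw image taken off the aging drive
    "-vga", "cirrus",           # Cirrus Logic video has stock Win95 drivers
    "-nic", "none",             # keep the ancient guest off the network
    "-serial", "/dev/ttyUSB0",  # hand a USB-serial adapter to the guest as COM1
]

subprocess.run(qemu_cmd, check=True)
```

If the instrument hangs off a proprietary ISA card instead, it obviously gets harder; that’s the edge case I get to next.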
Perhaps there are other hurdles, such as something running on an ISA card, but even so, eventually that isn’t going to matter. Primarily, though, the hurdle seems to be the software itself and the data it facilitates. Does it really have some sort of ancient hardware dependency? Maybe. But in all that time, a ‘lab guy’ whose main role is just these two machines must have had some time to experiment and figure this out. The data must be copyable, even as a straight hard drive image if it isn’t a flat file (extremely doubtful, but it doesn’t matter). The data is, by the author’s own emphasis, CRITICAL.
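And to be clear about what a straight hard drive image involves: it’s nothing more than a sector-by-sector copy. In practice you’d reach for dd or ddrescue, but the idea fits in a few lines; the device node and output name here are made up for illustration:

```python
# Sector-by-sector copy of the old drive to an image file.
# "/dev/sdb" and the output name are placeholders; run as root and
# triple-check the device node before touching anything.
SRC = "/dev/sdb"
DST = "win95-lab.img"
CHUNK = 1024 * 1024  # copy 1 MiB at a time

with open(SRC, "rb") as src, open(DST, "wb") as dst:
    while True:
        block = src.read(CHUNK)
        if not block:
            break
        dst.write(block)
```

Once that image exists, the data survives the original drive, whatever they decide to do about the software.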
If it is CRITICAL, then why don’t they give it that priority, even if it falls to the lone ‘lab guy’ who’s acting as IT?
Unless there’s some big edge case that simply isn’t being mentioned, something above and beyond the software they describe, I feel like I’ve put more effort into typing these responses than it would take to solve the hardware-on-life-support side of it. Solving the software dependency side? Depending on how the datasets are logically stored it may require a software developer, but it also may not. Simply virtualizing the environment would solve many, if not all, of these problems with minimal investment, especially for CRITICAL (their emphasis) data and with ~20 years to figure it out. It would take nothing more than a new computer, some media to install Linux or *BSD from, and perhaps a COTS converter if the instrument uses something like an LPT interface or a DB9/DE-9 D-sub serial port (you can still find modern motherboards, cards and even laptops that support those, and a cheap USB adapter certainly works as well).
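For the serial side specifically, checking whether one of those cheap USB adapters can even reach the instrument is a five-minute test. Here’s a sketch using the pyserial package; the port, baud rate and query bytes are all guesses of mine, and the real values would come from the instrument’s documentation or from watching what the Windows 95 software sends:

```python
import serial  # the pyserial package

# The port, baud rate and query are placeholders, not values from the post;
# substitute whatever the instrument actually expects.
with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as port:
    port.write(b"*IDN?\r\n")     # hypothetical identification query
    reply = port.readline()      # read one line of response, if any
    print(reply.decode(errors="replace") or "no response")
```

If the instrument answers at all, you know the physical link isn’t the blocker.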
Anyway, I’m just going to leave it at that. I’ve said a lot on the subject to numerous people and don’t have much more to add, other than that this is most likely solvable, and outside of severe edge cases, solvable without expert knowledge given the timeframe.
I suppose your only issue here is that the software vendor or some entity should support it forever.
If no entity wants to take on support, they should be forced to release the source code to the Public Domain. Copyright is a social contract, not an entitlement – if you don’t hold up your end of the bargain of keeping it available, you deserve to lose it.
Well, I think a better solution would be to deliver all source code with the compiled software as well. I suppose that would extend to the operating system itself and the hope that there’d be enough motivation for skillful folks to maintain that OS and support for new hardware. Great, that would indeed solve the problem and is a potential outcome if digital rights are overhauled. This is something I fully support.
What is stopping them now from solving access to this data, even if it’s in a proprietary format?
Really, again, I don’t take issue with the abandonware argument but rather with the situation in the post itself. Source code availability, and the rights surrounding it, are only one part of the larger problem in the post.
Source code and the rights to it aren’t the root cause of the problem in the post I was responding to. They could facilitate a solution, sure, but given that there are at least ~20 years of data at risk, there were also ~20 years of potential labor hours to solve it. Yet, instead, they chose to ‘solve’ it in a terrible way. That is what I take issue with.
This is really not a problem that’s fixed by open source.
The microscope will be controlled by a card that only plugs into 30 year old desktops. If you open source the drivers for it, this only gives you the source code to drivers for Windows 95. These drivers will be incredibly hacky and hard-coded, and will probably die if you install a service pack.
Having access to the source code doesn’t let you replace the entire stack, because you’re still physically tied to old hardware that is worse than a Raspberry Pi, and even just making sure that you can update Windows is a feat of engineering.
At the very least, being able to read the source code gives you a Hell of a head start on writing a new driver for an appropriate OS (and by that I mean Linux, obviously). Saves a whole reverse-engineering step.
Also, the “a card that only plugs into 30 year old desktops” thing isn’t quite as insurmountable as you think.
I’m not saying creating an entire project to adapt the controller and software stack to modern systems would be cheap or easy, but it’s possible – and more to the point, seemingly less expensive than buying the new microscope for “hundreds of thousands of €” (especially in the long run, since the company is likely to pull the same shit over and over again), even if you’ve got to pay a gaggle of comp-e grad students to put it together for you.
I mean the most upvoted answer in your link says it often is that insurmountable.
Basically, it’s a huge gamble and a substantial software engineering effort even when you know what you’re doing and source code is available.
It’s not surprising that biologists keep using old machines until they die.
In a GxP environment with bespoke pharmaceutical equipment, you are spending anywhere from 1-4000 collective labour hours and anywhere from 50k-250k for a control system upgrade, URS/TRS/SDS, Code risk assessment and review, and Qualification. To give you an idea, on a therapeutic manufacturing plant you’re looking at a handful of two-inch binders for the end-to-end system.
You are also (and more importantly) taking your resources off BAU or revenue-generating improvement work for this project. You have a validated and qualified system, and even if you are spending $10-20k for a $500 like-for-like IPC or control card, the cost benefit of another 5 years is worth it.
If your equipment is a medical device, such as a diagnostic microscope, add another few binders of paperwork and regulator sign-off. There’s a reason the equipment is so expensive.
If you get into the food industry or general manufacturing, the barriers to upgrading are much lower. For your machine shop running floppy disks, it’s a case of the external cost approaching the cost of a new machine, while the existing machine is fine.
As a maintenance professional, this is the sort of risk management we conduct on an ongoing basis.