|
Post by marcl on Jun 6, 2022 16:34:03 GMT -5
That was pretty common for a while... And I'll bet some folks still do their checkbook that way.
I also remember when people were actually claiming there would be "less actual paper" once everything was on computers. I agree it was a huge leap; from here on, everything will be in some sort of computer. But in the early '90s, when draftsmen still did everything on paper, the place I worked finally got AutoCAD or something like it, and the guy would still work out whole designs on paper first ... and THEN put them into the computer system.

When I worked for IBM for a few months in 2004, I was full-time WFH. Everybody I worked with was scattered. I had a totally paperless office: no printer, no filing cabinet, no trash can. A friend stopped over and asked where my filing cabinet was. I asked, "Where would the paper come from? There's no printer, and nobody to hand me paper!" He looked confused. But I had to have a whiteboard. That was just for me, because I needed it to think!

p.s. I still sit on that Aeron 18 years later. When I got the offer from IBM and saw the salary, I sprang for the leather arms.
|
|
|
Post by Boomzilla on Jun 6, 2022 17:32:38 GMT -5
And remember when it was so cool to "turbo" a PC - so it would run at 8 MHz instead of 4.77 MHz. And when the IBM PC-AT was a big deal because it actually came with a 10 MEGABYTE hard disk. (And even though a single photo from my current camera wouldn't fit on that now... it held every program I owned back then... AND every picture I had.) Actually, the first "real" computer I owned was an Apple II+. It came with 16 K of RAM... but it was upgradable... to a whole 64 K.
I started with an Apple III. Got so disgusted with it that I promptly sold it (for a HUGE loss) and got into Commodore 64s.
|
|
|
Post by marcl on Jun 6, 2022 17:58:44 GMT -5
And remember when it was so cool to "turbo" a PC - so it would run at 8 MHz instead of 4.77 MHz. And when the IBM PC-AT was a big deal because it actually came with a 10 MEGABYTE hard disk. (And even though a single photo from my current camera wouldn't fit on that now... it held every program I owned back then... AND every picture I had.) Actually, the first "real" computer I owned was an Apple II+. It came with 16 K of RAM... but it was upgradable... to a whole 64 K.
I started with an Apple III. Got so disgusted with it that I promptly sold it (for a HUGE loss) and got into Commodore 64s.

Apple IIc ... four years later, I worked for the three guys who invented the 64 and left Commodore to form a new company. Interesting story, how the 64 was a skunkworks project.
|
|
|
Post by leonski on Jun 7, 2022 1:39:29 GMT -5
It occurs to me that the transition from paper-based engineering to computer-based is one of those HUGE transitions that few previous generations experienced. Maybe the transition from animal power to steam compares, but not much else. Our children will experience a maybe bigger transition (in progress now) to the common use of artificial intelligence. Their children (if we survive) may experience the transition from being an Earth-based-only species to moving on out into the solar system & then the galaxy. OTOH, I'm reminded of Greg Bear's novel "Blood Music." As the old folks say, it goes to show you never can tell…

No such thing as Artificial Intelligence. Expert? Yes, but just try asking such a system about a favorite movie or what's for lunch....... I still believe there is a place for 'The Turing Test'...
|
|
|
Post by Boomzilla on Jun 7, 2022 6:23:20 GMT -5
Artificial intelligence, in the way I'm using the term, is not whether or not a computer can mimic a human being, but rather enabling the computer to make decisions based on its OWN analysis of data (as opposed to having the computer rely on if/then programming alone to make decisions). In other words, if a situation arises that has multiple interpretations, allowing the computer to decide how to interpret the situation and make a response on its own without waiting for human confirmation. This happens already in a lot of military equipment. If a ship's gun sees a high-speed incoming object that isn't identified as friendly, the computer automatically decides what the object might be. Then, based on that, it decides on the best strategy (out of several) to destroy the object and implements that strategy without waiting for human response. Science fiction is rife with AI computers that do not perform as originally desired, but the computers all have in common the autonomy to evaluate situations and respond based on their own analysis. The range of autonomy allowed the computer determines, in my definition, what constitutes AI. Several factors must be present:
- Ability to choose how to interpret incoming data from multiple interpretations
- Ability to implement multiple outcomes based on the interpretation
- Autonomy from human approval for both the data analysis and outcome implementation
Ultimately, this doesn't fit the formal definition of AI, but it's so close that the difference is trivial.
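To make the distinction concrete, here's a rough sketch in Python (the threat profiles, numbers, and responses are all invented for illustration; this is not any real fire-control logic). The first function is the classic if/then rule; the second weighs several interpretations of the same track data and acts on whichever fits best, with no human in the loop:

```python
# Toy contrast between a hard-coded if/then rule and a system that
# chooses among several interpretations of the same track data.
# Every class, feature, and number below is invented for illustration.

def rule_based(track):
    # Classic if/then: the programmer enumerated the cases in advance.
    if track["speed"] > 600 and not track["iff_friendly"]:
        return "engage"
    return "ignore"

def interpretive(track, profiles):
    # Score the track against several candidate interpretations and
    # act on whichever explanation fits the data best.
    def fit(profile):
        return -abs(track["speed"] - profile["speed"]) \
               - abs(track["altitude"] - profile["altitude"]) / 10
    best = max(profiles, key=fit)
    return best["response"]

profiles = [
    {"name": "anti-ship missile", "speed": 700, "altitude": 50,   "response": "engage"},
    {"name": "airliner",          "speed": 480, "altitude": 9000, "response": "ignore"},
    {"name": "helicopter",        "speed": 140, "altitude": 300,  "response": "query"},
]

track = {"speed": 650, "altitude": 40, "iff_friendly": False}
print(rule_based(track))              # engage (one fixed rule)
print(interpretive(track, profiles))  # engage, but chosen by comparing interpretations
```

Both are "programmed," of course, but only the second one is choosing its own interpretation of the data before it acts.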
|
|
|
Post by gus4emo on Jun 7, 2022 7:20:07 GMT -5
Artificial intelligence, in the way I'm using the term, is not whether or not a computer can mimic a human being, but rather enabling the computer to make decisions based on its OWN analysis of data (as opposed to having the computer rely on if/then programming alone to make decisions). In other words, if a situation arises that has multiple interpretations, allowing the computer to decide how to interpret the situation and make a response on its own without waiting for human confirmation. This happens already in a lot of military equipment. If a ship's gun sees a high-speed incoming object that isn't identified as friendly, the computer automatically decides what the object might be. Then, based on that, it decides on the best strategy (out of several) to destroy the object and implements that strategy without waiting for human response. Science fiction is rife with AI computers that do not perform as originally desired, but the computers all have in common the autonomy to evaluate situations and respond based on their own analysis. The range of autonomy allowed the computer determines, in my definition, what constitutes AI. Several factors must be present: ability to choose how to interpret incoming data from multiple interpretations; ability to implement multiple outcomes based on the interpretation; autonomy from human approval for both the data analysis and outcome implementation. Ultimately, this doesn't fit the formal definition of AI, but it's so close that the difference is trivial.

AI can make humanity better or worse, depending on how it is used. We don't want the technology to fall into the wrong hands. AI was and is still being developed by good people. A very simple example is when we make a call and a computer answers, and then we have choices.... Of course, it can be annoying when I say what I want and it keeps asking the same question, lol....
|
|
|
Post by Boomzilla on Jun 7, 2022 13:14:48 GMT -5
Emotiva Stealth DC-1 DAC vs. Emotiva Big Ego+ DAC:
The venerable DC-1 was introduced (so far as I recall) back in 2013. In 2021, the Big Ego+ came out. The DC-1 sold for $500 (2013 dollars), while the Big Ego+ is about $80. Ignoring features for the moment, have advances in technology brought the sound of the BE+ up to DC-1 standards? I can answer this one easily. They have not. The BE+ does offer amazing sound for its meager $80 price, but the DC-1 bests it in soundstage depth, width, and verisimilitude. Slam-dunk. Done.
I bought my used DC-1 for $200. Although used, it seems to function perfectly (and has a remote, which the BE+ lacks). It also has balanced outputs, more inputs (including an analog one), and more audio-nerd cred than any USB-powered DAC.
So the question is, "If Emotiva can offer the Big Ego+ with its level of performance at only $80, what could they do if they opted to offer a statement DAC for somewhere in the $800 range?"
The other question is, "Does Emotiva even want the stereo market anymore, or have they sold their soul to Home Theater?"
Just sayin'
Boomzilla
|
|
|
Post by hemster on Jun 7, 2022 14:33:22 GMT -5
Emotiva Stealth DC-1 DAC vs. Emotiva Big Ego+ DAC: The venerable DC-1 was introduced (so far as I recall) back in 2013. In 2021, the Big Ego+ came out. The DC-1 sold for $500 (2013 dollars), while the Big Ego+ is about $80. Ignoring features for the moment, have advances in technology brought the sound of the BE+ up to DC-1 standards? I can answer this one easily. They have not. The BE+ does offer amazing sound for its meager $80 price, but the DC-1 bests it in soundstage depth, width, and verisimilitude. Slam-dunk. Done. I bought my used DC-1 for $200. Although used, it seems to function perfectly (and has a remote, which the BE+ lacks). It also has balanced outputs, more inputs (including an analog one), and more audio-nerd cred than any USB-powered DAC. So the question is, "If Emotiva can offer the Big Ego+ with its level of performance at only $80, what could they do if they opted to offer a statement DAC for somewhere in the $800 range?" The other question is, "Does Emotiva even want the stereo market anymore, or have they sold their soul to Home Theater?" Just sayin' Boomzilla

Give up on 2-channel audio? Pray that's not the case! IMHO, they should rethink low-end!
|
|
|
Post by Boomzilla on Jun 7, 2022 15:17:25 GMT -5
Give up on 2-channel audio? Pray that's not the case! IMHO, they should rethink low-end! I disagree - I think that the BasX series RULES in low-end. Superior value & awesome sound. Boom
|
|
|
Post by leonski on Jun 7, 2022 16:06:45 GMT -5
Artificial intelligence, in the way I'm using the term, is not whether or not a computer can mimic a human being, but rather enabling the computer to make decisions based on its OWN analysis of data (as opposed to having the computer rely on if/then programming alone to make decisions). In other words, if a situation arises that has multiple interpretations, allowing the computer to decide how to interpret the situation and make a response on its own without waiting for human confirmation. This happens already in a lot of military equipment. If a ship's gun sees a high-speed incoming object that isn't identified as friendly, the computer automatically decides what the object might be. Then, based on that, it decides on the best strategy (out of several) to destroy the object and implements that strategy without waiting for human response. Science fiction is rife with AI computers that do not perform as originally desired, but the computers all have in common the autonomy to evaluate situations and respond based on their own analysis. The range of autonomy allowed the computer determines, in my definition, what constitutes AI. Several factors must be present: ability to choose how to interpret incoming data from multiple interpretations; ability to implement multiple outcomes based on the interpretation; autonomy from human approval for both the data analysis and outcome implementation. Ultimately, this doesn't fit the formal definition of AI, but it's so close that the difference is trivial.

All, or nearly all, of what you wish for comes under the heading of 'expert'....... Like maybe a surgeon deciding on the best approach to a specific operation or condition. Expert systems with HUGE relational databases are very good at some things...... Look what happened on Jeopardy with Watson or whatever IBM came up with. Chess was EASY compared to the task the IBM engineers set for themselves. The device you mention doesn't decide..... except for how it was programmed with 'weighted' (is that the right term?) decision trees..... Some of what is called AI is no doubt quite ingenious. Some photographic systems, for example, with focus / exposure and later noise reduction and sharpening. VERY good, indeed. It is just my OPINION, but I feel the word 'intelligence' should be reserved for some form of sentient being. Even dogs or other animals? This gets us to the difference between animals and people..... a conversation we may reserve for offline or simply another time...... CHEERS....
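p.s. What I mean by 'weighted' is roughly this kind of thing. A toy sketch in Python (every rule and number here is made up for illustration, not taken from any real system): the machine isn't reasoning, it's just adding up evidence scores a programmer assigned in advance.

```python
# Toy expert-system scoring: each rule contributes a pre-assigned weight,
# and the "decision" is just whichever hypothesis collects the most evidence.
# All rules and weights are invented for illustration.

RULES = [
    # (condition on the facts, hypothesis it supports, weight)
    (lambda f: f["fever"],              "flu",     0.6),
    (lambda f: f["cough"],              "flu",     0.3),
    (lambda f: f["cough"],              "allergy", 0.2),
    (lambda f: f["itchy_eyes"],         "allergy", 0.7),
    (lambda f: f["season"] == "spring", "allergy", 0.3),
]

def diagnose(facts):
    scores = {}
    for condition, hypothesis, weight in RULES:
        if condition(facts):
            scores[hypothesis] = scores.get(hypothesis, 0.0) + weight
    # No introspection, no understanding: the biggest tally simply wins.
    return max(scores, key=scores.get) if scores else "no conclusion"

print(diagnose({"fever": False, "cough": True, "itchy_eyes": True, "season": "spring"}))
# -> allergy (0.2 + 0.7 + 0.3 beats flu's 0.3)
```

Very useful. Very clever. But nothing in there is 'thinking'.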
|
|
|
Post by leonski on Jun 7, 2022 21:06:09 GMT -5
First? checkers
Second? chess
Third? GO
The last is a primarily ASIAN game with, I think, only 9 simple rules. You can teach it to a child; mastery takes a lifetime. But a computer finally cracked that most difficult nut.
|
|
|
Post by Boomzilla on Jun 8, 2022 14:29:01 GMT -5
|
|
|
Post by Boomzilla on Jun 8, 2022 16:30:28 GMT -5
...The device you mention doesn't decide..... except for how it was programmed with 'weighted' (is that the right term?) decision trees...

No. That's not at all what I'm talking about. The situation I'm describing is that the machine is programmed to deal with A, B, or C, but it's suddenly presented with X. This is totally outside the programming and decision trees. But if we allow a sufficiently advanced machine to do so, it should be able to compute a solution totally outside of the decision-tree structure based on fundamental analysis, general problem-solving algorithms, and previous experience. Is it still programmed? Yes, it is. But it forces the machine to do numerous things:

1. Self-analyze - Is X truly outside of the programming, or is there a malfunction?
2. If X truly is outside of the programming, what core logic programming or previous data ("learning") will allow analysis?
3. What is the probability that the analysis is correct?
4. What are the consequences if action is taken but the analysis was wrong?
5. What are the consequences if no action is taken?
6. Which is the "best" decision and consequent action (or inaction)?

To do this, a LOT of computational power is going to be needed. But we're rapidly reaching that level. We once had to build machines; now machines can build themselves. We once had to program machines; we're rapidly reaching the point where machines may be able to program themselves.

The "network of things" is an example of massive computational power. In some cities, a network of web-connected video cameras watches most everyone on the streets. That network has face recognition available and is told to watch for specific terrorists. The internet is monitored for specific words and phrases, and those are pulled from the total traffic for identification. To do this, ALL internet traffic must be monitored, and only the trigger words and phrases are saved. This is not AI, but it is a strong indicator of the computing horsepower now regularly and widely available.

Boom
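p.s. A back-of-the-napkin sketch in Python of what steps 1-6 might look like as an expected-consequence calculation (every threshold, cost, and confidence number here is made up for illustration, not from any real system):

```python
# Toy sketch of handling an input the decision tree doesn't cover:
# flag it as novel, take a confidence estimate from some fallback analysis,
# then weigh the cost of acting against the cost of doing nothing.
# All numbers are invented for illustration.

KNOWN_CASES = {"A", "B", "C"}

def handle(case, confidence, cost_wrong_action, cost_inaction):
    # 1. Self-check: is this genuinely outside the programming?
    if case in KNOWN_CASES:
        return f"handled {case} by the normal decision tree"

    # 2-3. A fallback analysis produced some interpretation, with an
    #      estimated probability of being correct (passed in here).
    # 4-5. Compare the expected losses of acting vs. sitting tight.
    expected_loss_acting = (1 - confidence) * cost_wrong_action
    expected_loss_waiting = cost_inaction

    # 6. Pick whichever expected loss is smaller.
    if expected_loss_acting < expected_loss_waiting:
        return f"act on {case} (expected loss {expected_loss_acting:.1f})"
    return f"hold and escalate to a human (expected loss {expected_loss_waiting:.1f})"

print(handle("X", confidence=0.9, cost_wrong_action=100, cost_inaction=40))
# -> act on X (expected loss 10.0)
print(handle("X", confidence=0.4, cost_wrong_action=100, cost_inaction=40))
# -> hold and escalate to a human (expected loss 40.0)
```

The interesting part is step 6: the machine isn't asking permission, it's comparing what it expects to lose by acting versus by waiting.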
|
|
|
Post by leonski on Jun 8, 2022 16:31:25 GMT -5
Where's the coin slot and where does the bottle come out?
Not a big criticism, but in drawings like this? SCALE makes a difference. It could be any size, from a desktop or tower computer to some kind of large cabinet / computer / 'device'?
|
|
|
Post by leonski on Jun 8, 2022 16:40:10 GMT -5
It is POSSIBLE that at some time in the future a machine will reach 'critical mass' and become self-aware. I don't know if I want to be around when that happens.
Just to take one point from your post above? Point 1? It doesn't 'self-analyze'. No introspection or thought. It just looks at a BIG LIST of stuff. Not on the list? Go to point #2 and look for things like 'keywords' and whatever else. IBM did this with WATSON and Jeopardy. A HUGE relational database. Keyword search. Connection algorithms. And when it got it wrong..... it was funny.
People love the idea of AI, as if it'll ever be your buddy (drinking or otherwise). People LOVE to ascribe 'intelligence' to something which might just be 'ingenious'..... or 'clever'? I love cats. Not really smart. Clever? Ingenious? Curious? But not much 'reasoning' beyond 'Is it good to eat?' or 'Will it chase me?'.......
HAL is still in the future.
|
|
|
Post by cwmcobra on Jun 8, 2022 16:58:03 GMT -5
It is POSSIBLE that at some time in the future a machine will reach 'critical mass' and become self-aware. I don't know if I want to be around when that happens.

I just watched the Terminator movies for the first time this week. Sounds like exactly what you're positing.... The younger generation will be left to figure it all out!
|
|
|
Post by Boomzilla on Jun 8, 2022 16:58:12 GMT -5
HAL may still be in the future, but the future is HAL...
|
|
|
Post by Boomzilla on Jun 8, 2022 17:04:43 GMT -5
Where's the coin slot and where does the bottle come out? Not a big criticism, but in drawings like this? SCALE makes a difference. It could be any size, from a desktop or tower computer to some kind of large cabinet / computer / 'device'?

High-voltage switchgear. Enclosed in a cabinet so that when the breaker explodes (which they sometimes do), the chances of killing the person operating the breaker are reduced. That said, we once had one expel its guts THROUGH the metal cabinet. Fortunately, the operator wasn't standing immediately in front of it when it blew. To prevent arcing, the contacts for high-voltage (and even medium-voltage) breakers must make and break very quickly. To create that acceleration, big springs are used. Whether the breaker is connecting the load or disconnecting it, the springs must move the contacts quickly enough to prevent electrical arcing. One arc, and the breaker is due for a rebuild (if it isn't totally destroyed). We had an electrician who earned money on the side working on live 50,000 VAC switchgear. He said the secret to doing it successfully was that you couldn't sweat. He said you could feel the voltage on your skin looking for a path to ground, but dry skin was insufficiently conductive. I figure he earned his money!
|
|
|
Post by leonski on Jun 8, 2022 20:20:36 GMT -5
I'm not certain I agree, but Stephen Hawking is ON RECORD as being wary of REAL intelligent machines........ www.bbc.com/news/technology-30290540 Alarmist? Maybe, but I'd at least hear him out.

We had a furnace 'lock up' and had to flip the breaker off...... The element was, I think, 100 amps or better, and the control circuit was a LOT less. This was VERY high-current equipment. A tech once dropped a 1/2" wrench and it bridged the output of a transformer the size of an end table..... and it evaporated..... After service, the tech flipped it ON and the breaker exploded. I was on the other side of the wall, watching the front panel of the machine, and it sounded a LOT like a shotgun blast. It was weeks before the poor guy could even flip a wall switch for his reading light..... The quartz insides were melted to a sag and some silicon had been melted to a puddle. The silicon carbide service bits didn't like it, either....... Our IMPLANTERS ran to 200 keV but not the kind of current I think you're talking about. Very high-voltage power supplies are VERY different from the stuff many people are used to in a stereo or even a TV...... We once blew a Fluke meter (with HV probe) halfway across the room and turned it into semiconducting charcoal.....
|
|
|
Post by gus4emo on Jun 8, 2022 21:54:00 GMT -5
HAL may still be in the future, but the future is HAL...

You know what is amazing? Computers may be wonders, any way you look at it.... but they are still made by humans!
|
|