My bookmark travails
I’m a bookmark hoarder.
Not actual bookmarks, the decorative kind you place in actual books. I only have a few of those.
No, I’m talking about favorite websites I mark under “bookmarks” on whatever browser I might be using. On Firefox (Mozilla—my main browser for the last few years) I have hundreds of them. I would be loath to count them and find out the total, but suffice to say that it takes 15 seconds to scroll down the list at top speed (I just timed it).
So every now and then I weed out some of the old ones. I decided to do that last night, but found that the “delete” function didn’t work. Nor could I move them around. In fact, I could do nothing with them at all. They had seemingly become set in stone.
That might not seem so bad to you, but for me—with such an unwieldy list already—it was troubling. If I couldn’t delete some, soon their sheer numbers would become overwhelming.
So I did what I always do: I Googled for a solution. The solutions were so impossible to understand and follow that it became almost humorous, especially when they were found under some heading like “Help” for Mozilla. Help? Only a programmer could understand stuff like this. At any rate, I certainly couldn’t.
I went to discussion boards and read ancient threads where solutions were given. None of the solutions worked for me. Meanwhile, precious hours passed, as I tried failed solution after failed solution.
And then I happened on the obvious. So obvious that I’m ashamed to admit I didn’t do it right off the bat: someone on some message board had suggested to close Firefox and open it again.
It worked.
Computers are insane. And sometimes they seem determined to have me join them.
Gimme an “O”
OOOH!
Gimme a “C”
CEEE!
Gimme a “D”
DEEE!
What’s that spell?
What’s that spell?
What’s that spell?
O TIMOR MORTIS CONTURBAT ME!
“Computers are insane. And sometimes they seem determined to have me join them.”
Computers, since the beginning, do only what their programmers tell them to do.
Now, if you had said “programmers are insane. And..” I would agree with you.
And I was one of them, professionally, fighting to make them operate not quite so insanely.
As Mr. Damore was fired for saying: programmers think differently than the rest of you folks.
Remember that, and realize the computers are their “children.”
PS I quit using bookmark folders as a place to FIND anything, because they are too full, and you can’t sort or search within the folders very conveniently, and most bookmark “names” are just the default ID for the site you visited.
Mostly I just “star” pages I’ve read so I won’t keep re-reading them when accessed via links on other sites.
I’ve started adding custom names to identify the URLs, especially of comments I’ve noted, but it’s just for entertainment.
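For anyone who does want to sort or search Firefox bookmarks outside the browser: Firefox keeps them in a SQLite database (places.sqlite) in the profile folder, and a small script can query it. Below is a minimal sketch, assuming the usual moz_bookmarks/moz_places tables; the copy path is a placeholder, and you should work on a copy of the file while Firefox is closed.

```python
# Minimal sketch: search Firefox bookmarks by keyword in the title or URL.
# Assumes the standard places.sqlite schema (moz_bookmarks joined to moz_places);
# "places_copy.sqlite" is a hypothetical copy of your profile's places.sqlite.
import sqlite3
import sys

DB_COPY = "places_copy.sqlite"

def search_bookmarks(keyword: str) -> None:
    conn = sqlite3.connect(DB_COPY)
    try:
        rows = conn.execute(
            """
            SELECT b.title, p.url
            FROM moz_bookmarks AS b
            JOIN moz_places AS p ON p.id = b.fk
            WHERE b.type = 1                      -- 1 = bookmark, 2 = folder
              AND (b.title LIKE ? OR p.url LIKE ?)
            ORDER BY b.title
            """,
            (f"%{keyword}%", f"%{keyword}%"),
        )
        for title, url in rows:
            print(f"{title or '(untitled)'}  ->  {url}")
    finally:
        conn.close()

if __name__ == "__main__":
    # e.g. python search_bookmarks.py recipes
    search_bookmarks(sys.argv[1] if len(sys.argv) > 1 else "")
```

Nothing fancy, but it sidesteps the problem of scrolling through a fifteen-second-long list.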
I was an IT guy for 4 decades.
Did my share of ‘help desk’ work.
It was somewhat bizarre how often a non-working function on a piece of hardware or within a program could be ‘fixed’ with a reboot.
“Computers are insane. And sometimes they seem determined to have me join them.”
There is a line I came across many years ago and which still seems perfectly true today: “To err is human. To really screw things up you need a computer.”
This is why “restart” is often used in FAQs for customer service and QA. However, for people who have already done the checklist, it just means redoing the checklist:
Power off Modem!
Wait 10 seconds!
Plug power back into modem!
Check Internet service!
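If you ever wanted to script that last step, here is a rough sketch of what “Check Internet service!” amounts to, assuming a plain TCP reachability test; the host, port, and timings below are just placeholder choices, not anything official.

```python
# Rough sketch of the "Check Internet service!" step: poll until a TCP
# connection to a well-known host succeeds. Host/port/timings are placeholders.
import socket
import time

def internet_up(host: str = "8.8.8.8", port: int = 53, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Power-cycle the modem by hand first, then let this poll until it comes back.
    for attempt in range(1, 11):
        if internet_up():
            print(f"Internet is back (attempt {attempt}).")
            break
        print(f"Still down (attempt {attempt}); retrying in 10 seconds...")
        time.sleep(10)
    else:
        print("No connection after 10 tries -- time to call support.")
```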
That was back in the days of DOS and Win 3.1, when we had to manually edit autoexec.bat (or whatever file it was) to free up more conventional memory so that Legend of Kyrandia could actually run… we already ran through these checklists before employees ever got them.
Btw, rats kind of breed like tribbles.
When in doubt, restart the computer.
Anyone else remember the old car joke?
When the engine fails, everyone rolls their windows up and down and restarts.
As my daughter likes to say: “The miracle of rebooting.”
That is just about my favorite Star Trek.
Yup I got a panicked email from a friend a few nights ago saying his phone had just stopped working. No cell service and all his photos, contacts, etc gone. It sounded really bad. I had him restart it. No luck. But then I asked are you seeing the Google animation (it was an Android) when it restarts? Er, no he wasn’t. He figured out how to really turn it off on his own and the problem was soon solved. Phew….
A few years ago, my washing machine stopped pumping the water out.
I was afraid it was a pump or clutch failure.
A guy online offered the solution for $25. Said it was the pump.
Then I remembered my best friend had been a repairman.
I called him and asked. He said unplug it and plug it back in.
It had never occurred to me to reboot my washing machine.
And another friend of mine, responding to a service call, flew to VA, lodged overnight, drove an hour the next day, and went into the customer’s office and plugged the device in.
It also happened that when one of my laptops absolutely refused to straighten up & fly right, and ultimately to fly at all, 18 months of complete bed-rest in a darkened room, with not so much as an occasional visit from Nursie to check its vitals, resulted in a complete recovery.
That is, I left it for dead for a year and a half, and it repaired itself … somehow.
(Or perhaps the Great Frog intervened….)
Computers are so ubiquitous and automation is advancing so rapidly that it has been estimated that the changes automation and AI are going to bring about in our workforce, our economy, and our society will be equivalent in disruptive force to what happened when the Industrial Revolution swept through Western workforces and societies.
But I’m not seeing the level of awareness, concern, and action that this major, game-changing development warrants, since it’s estimated that many millions of low-level jobs are going to be eliminated as they become automated.
Among those apparently soon to be replaced by driverless vehicles are a lot of long-haul truckers–we have around 2-3 million of them–plus many cab drivers, stockers and pickers in warehouses (think of those gigantic Amazon regional warehouses with millions of products stacked on sky-high shelving), delivery people, grocery store stockers, and fast food workers.
That’s a lot of people who will be out of a job.
Am I seeing any great notice of this massive job loss creeping up on perhaps tens of millions of workers, who have few higher level skills, and likely not a lot of education?
Is this one of the hot topics that everyone is talking and thinking about?
Are there already massive anticipatory job re-education programs gearing up?
I’m not seeing this.
On a much more potentially ominous and consequential note, we have the onrushing development of AI, a development that is inevitable, because someone, somewhere will proceed to try to create a true AI, no matter what the risks.
How about the first test of the Atomic bomb, that Teller and some other first rate physicists thought might have the potential to ignite the Hydrogen in the atmosphere, killing us all? There was concern by experts that it might be possible, but they went ahead with the test anyway. Good thing they were wrong.
See, for instance, the stories about the surgeon in China who is about to try to pull off the first full head transplant. It may not be ethical or advisable, but he’s gonna try it, just as the Chinese scientists who have just cloned two monkeys are going to press ahead. Look at how supposedly ethical doctors and surgeons medicated, sliced, and diced Michael Jackson, and, from the evidence we see, innumerable others.
There is a reason that a lot of Science Fiction–which I think could perhaps be reasonably viewed as dreams from our collective unconscious, trying out this or that potential future–so often posits AI as an existential threat to the human race, because it has high potential to be just that; see SKYNET et al.
How can you understand all of the ramifications of massive amounts of increasingly complex computer code?
Remember the Mariner I space probe, heading for Venus, that went off course and had to be destroyed because one hyphen was left out in one line among millions of lines of code?
I see mentioned the idea that, as computer code becomes ever more complex, and all its ramifications so hard to fathom, computers themselves will be tasked to write code.
What will happen when–as computers become more and more capable and powerful, and computer code becomes more and more complex–some AI somewhere develops self awareness?
How can you constrain an entity that can eventually–and probably very quickly–transcend you in intelligence? An entity that may well have an entirely different world-view, outlook, goals, and its own set of priorities.
Speaking of that, what sort of world-view, set of priorities, and goals would be developed by an entity that did not grow out of the organic and social base we did, that had vastly different sensory inputs, and that had vastly greater intellectual capabilities?
How can you make sure that such an AI won’t eventually discover some reason to eliminate the human race as an obstacle to one or more of its goals?
My guess is that axing Uncle Jim and Aunt Edna ain’t gonna bother a self-aware AI.
My bet is that Asimov’s “Three Laws of Robotics” aren’t gonna cut it.
I think this onrushing development is scary as hell, and I also think that some scientists, blinded by scientific hubris, will eventually–by accident or intent–create a self-aware AI, no matter what the risks might be.
Robots are a kind of avatar.
Artificial sentient intelligence wasn’t something that was feasible until the development of quantum computers (D-Wave’s qubits). IBM and Microsoft are also getting into the game now that the prototypes have demonstrated that the hardware works.
How about the first test of the Atomic bomb, that Teller and some other first rate physicists thought might have the potential to ignite the Hydrogen in the atmosphere, killing us all? There was concern by experts that it might be possible, but they went ahead with the test anyway. Good thing they were wrong.
I remember that.
Look up Operation Dominic and Operation Fishbowl, where the US and Russia kept detonating megaton nukes in the upper atmosphere. It’s like they were trying to test something’s boundaries, or maybe they wanted to see what it would be like if the World Really Burned.
1 It happened after the sons of men had multiplied in those days, that daughters were born to them, elegant and beautiful.
2 And when the angels, (3) the sons of heaven, beheld them, they became enamoured of them, saying to each other, Come, let us select for ourselves wives from the progeny of men, and let us beget children.
(3) An Aramaic text reads “Watchers” here (J.T. Milik, Aramaic Fragments of Qumran Cave 4 [Oxford: Clarendon Press, 1976], p. 167).
3 Then their leader Samyaza said to them; I fear that you may perhaps be indisposed to the performance of this enterprise;
4 And that I alone shall suffer for so grievous a crime.
5 But they answered him and said; We all swear;
6 And bind ourselves by mutual execrations, that we will not change our intention, but execute our projected undertaking.
7 Then they swore all together, and all bound themselves by mutual execrations. Their whole number was two hundred, who descended upon Ardis, (4) which is the top of mount Armon.
(4) Upon Ardis. Or, “in the days of Jared” (R.H. Charles, ed. and trans., The Book of Enoch [Oxford: Clarendon Press, 1893], p. 63).
8 That mountain therefore was called Armon, because they had sworn upon it, (5) and bound themselves by mutual execrations….
http://www.bibliotecapleyades.net/enoch/1enoch01-60.htm
Human ancestral memory has some quite peculiar concepts concerning creating non human avatars. It’s most likely an ancestral warning or memory.
Why are these things not in the bible? Because the human religious control scheme decided that nobody needed to read them.
Even though Peter and Jude read and quoted 1st Enoch, the modern religions act like they have amnesia.
As for Mount Hermon, that is the Mount of Curse Binding, where the transfiguration of one Jesus of Nazareth occurred and where he said, approximately, that “the gates of hell will not” triumph. People back then knew the theological significance of standing on that mountain and saying the words that he did. Context that humans have forgotten over the years.
Time wise, this would be before Noah, during Genesis Six.
[edited for length by n-n]
Snow on Pine: I am reminded of the engineer on the guillotine who, while awaiting his own execution, noticed why the device had failed on the two previous victims and suggested the remedy.
Snow on Pine:
It all depends on whether self-awareness is possible in AI. That is a hotly debated topic.
I agree that the prospect is frightening.
I created my own home page (simple HTML and CSS) with my favorite web sites. I copy and save in Notepad, organized by subject, the URLs of articles I want to keep (bookmarks).
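A minimal sketch of one way to automate that kind of home page, assuming the saved URLs live in a plain text file with one “URL description” pair per line; the file names here are made up, not the commenter’s actual setup.

```python
# Hypothetical sketch: turn a plain-text list of favorite sites into a simple
# home page. Assumes links.txt holds lines like "https://example.com  short note".
import html
from pathlib import Path

LINKS_FILE = Path("links.txt")   # made-up input file name
OUTPUT_FILE = Path("home.html")  # made-up output file name

def build_home_page() -> None:
    items = []
    for line in LINKS_FILE.read_text().splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        url, _, label = line.partition(" ")
        label = label.strip() or url  # fall back to the URL if no note was saved
        items.append(
            f'<li><a href="{html.escape(url, quote=True)}">{html.escape(label)}</a></li>'
        )
    OUTPUT_FILE.write_text(
        "<!DOCTYPE html>\n<html><head><meta charset='utf-8'>"
        "<title>My bookmarks</title></head>\n<body>\n<ul>\n"
        + "\n".join(items)
        + "\n</ul>\n</body></html>\n"
    )

if __name__ == "__main__":
    build_home_page()
```

Rerunning the script after adding a line to the text file regenerates the page, so the “home page as bookmark list” approach stays a one-file affair.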
Neo: Which is scarier? A self-directed AI that controls all, or a character in the background that controls the AI that controls everything.
Think about this.
Human beings exist within what we might call “perceptual boxes”; our perceptions and, thus, our conception of the world and Universe around us is limited in many ways.
In addition, we cannot ever know everything, and we are prone to forget.
Moreover, if you believe Freud, and especially Jung, the real motivations behind our actions are almost always unclear to us and, without much probing, hidden from our sight; our Unconscious is really in the driving seat.
Evolution has seen to it that–out of the whole electromagnetic spectrum–our eyes can only apprehend a certain range of wavelengths–that out of the whole range of frequencies of sound–our ears can only hear a particular section of that range, and we can only smell and taste certain things (yes, I know that there are certain genetic differences that enable some people to smell and taste things that some others cannot). Our eyes face forward, and we cannot see behind our heads, nor twist our heads almost all the way around like an Owl. We do not have a 360 degree view of the world.
Then, there is another, culturally determined box we live within, what some call “consensus reality.” Each of our cultures has a typical view of the world, believes in certain things, considers other things as ranging from unlikely to impossible; another box.
Added to this, if you believe the Sapir-Whorf hypothesis, each of our languages also has a profound, determinative role in shaping how we think, what we can conceive of, how we perceive the world and Universe around us, and even Time.
My Japanese History professor had worked for 10 years for the State Department at our Embassy in Japan, and spoke fluent Japanese with a perfect Tokyo accent. He told the story of going out into the countryside and getting lost. Eventually, he spotted a farmer in a field, pulled up, and asked for directions in perfect, fluent Japanese. To which the Japanese farmer replied, “I don’t speak English.” My Professor said that he repeated his question several times, and always got the same answer.
The farmer just simply could not conceive of a Westerner speaking perfect Japanese, so what he perceived my Professor to be speaking was English, which was unintelligible to him.
Now imagine an AI–which, while being constructed and in effect “taught” by human beings–is potentially not constrained in any way by our organic limitations and imperatives, not resident in, imprisoned in, or limited by any of our boxes, potentially able to have a 360 degree perception of all of the electromagnetic spectrum, able to hear the whole spectrum of sounds, have access to unlimited amounts of information, and not forget.
I’d imagine that such an AI would perceive a vastly different and much more complex world and Universe, full of things, forces, and messages that we simply cannot perceive or comprehend.
How would such a self-aware AI perceive human beings and, not limited by our biological imperatives and limitations, how would it perceive that world and Universe, its place in it, and what would its goals be?
Would such an entity have any loyalty to humans, any sense of gratitude to us, its creators, when it becomes self-aware?
Why should it do what we limited humans want it to do, with the whole Universe and all of its potential spread out before it? There for its taking?
Consider, as well, how not being limited to a maximum of, say, 100 years of existence, as we are, would impact the outlook of a potentially immortal AI, one not subject to any of the illnesses that afflict us and that impose their limitations upon us.
A self-aware AI would truly be another order of existence.
Perhaps there is an immortal intelligence guiding the direction of things with an unseen hand.
All powerful, all knowing, all wisdom, but one that is intimately aware of the human condition.
How scary is that?
Then, of course, there is the question of whether or not such a self-aware AI would develop a sense of self preservation.
It might seem, if things got out of hand, to be just a matter of “pulling the plug” on an out of control AI.
But who can know if such a very complex and capable entity–far more intelligent, perhaps, than its creators, and far faster in thought and deed–mightn’t have made its own provisions for its own survival?
Actually, the more I examine this issue, the more reasons I see not to create an AI that has the potential to become self-aware.
To hedge an AI about with all sorts of limitations, so that it cannot possibly become self-aware, and a danger to us all.
As someone commented, in creating an AI what we are really doing is creating a demon. Best, then, to create a very little demon, rather than a very large and capable one. Or, better yet, no demon at all.
The farmer just simply could not conceive of a Westerner speaking perfect Japanese, so what he perceived my Professor to be speaking was English, which was unintelligible to him.
The accent of the countryside and the accent of Tokyo most likely were not the same, especially back then, before Japanese became more and more standardized using katakana phonetics.
I went to some parts of Tennessee up in the Appalachians near Valley Forge, and the “Southern English” they were speaking was about 85% incomprehensible. It wasn’t merely a case of a British accent vs. an Australian accent vs. a Southern accent vs. a Brooklyn accent vs. Ebonics.
Not even close. I can easily visualize a Japanese rural person having developed their own sub-dialect to the point where Tokyo-accented Japanese sounds like Eigo.
Computers do not “perceive” the universe. They do produce quantum interference waves, which have been engineered into computer applications. Computers merely translate the data of certain instruments into 1s and 0s.
The human brain and senses are closer to a quantum computer than to a classical computer: in the sense that quantum computers access parallel or all universes to process at the same time to get a single result in a single universe or outcome. Whether the Copenhagen interpretation still holds up or not doesn’t matter.
One of the differences is that the eyes of most mammals can perceive electromagnetic fields. Even insects like bees get lost and fail to return to the hive because of EMF radiation. If EMF radiation back in the days of satellite phones, before even first-gen Wi-Fi, could do that, then it would explain why, after 4G was created, birds mysteriously started dying off in the hundreds and thousands. It’s not merely the windmills killing them, then.
This relates to humans because the human eye has a similar function, and tests on children have produced data suggesting that children at least know magnetic north from other directions while blindfolded. That sense probably atrophies over time in adults, as we don’t use it or even notice it.
So humans have more than 5 senses, but the data is often just ignored and unparsed. The 5 senses are also a more direct connection to the universe than the instrumental digital coding of a classical computer. That is because quantum interference is related to Sentient Willpower. The higher the sentience or Willpower of the observer, the more the results can be warped. This can be called a derivative of the entanglement phenomenon or quantum locking.
https://www.youtube.com/watch?v=Ws6AAhTw7RA
The lock breaks when an outside force and user unlocks it ; )
Magnetic fields affect quantum forces, but gravity is not unified in physics, and thus it often disobeys the inverse square law. That’s why gravity is not unified and requires dark matter and other things to explain galaxy formation.
Human beings have our own magnetic fields, which can be tweaked by our willpower and consciousness. This is why consciousness in observers is related to affecting quantum phenomena, since quantum sub-particles do have magnetic charges and nuclear forces.
Quantum mechanics has provided levitation and other things that most people learn about only in science fiction or fantasy novels. Newton’s Theory of Gravity and Einstein’s Relativity were insufficient until the speed of light transcended the constant c, with quantum entanglement. Until people began to accept that, we still had a “bottleneck”.
There’s a lot of stuff in science that gets bogged down in a consensus that is always wrong, and it takes a new generation of people to break through the consensus by waiting until Newton/Einstein die off and the students of Newton/Einstein die off.
Actually, the more I examine this issue, the more reasons I see not to create an AI that has the potential to become self-aware.
To hedge an AI about with all sorts of limitations, so that it cannot possibly become self-aware, and a danger to us all.
As someone commented, in creating an AI what we are really doing is creating a demon. Best, then, to create a very little demon, rather than a very large and capable one. Or, better yet, no demon at all.
I consider contrasting this viewpoint with some of the holy texts sourced from history.
For example, a god so loved the world that he sent a savior to save it. To correct for translation issues, a god in a special relationship with the world and humanity, sent his own son as a savior to save it. And also gave free will to humanity, so that humans can choose to love or not love God.
What the King James Version translates as “love” and “hate” is a mistranslation.
http://biblehub.com/luke/14-26.htm
When Jesus says hate your father and mother, that’s not what the original word meant. What the word means you can just click on it there.
miseó: to hate
Original Word: μισέω
Part of Speech: Verb
Transliteration: miseó
Phonetic Spelling: (mis-eh’-o)
Short Definition: I hate, detest
Definition: I hate, detest, love less, esteem less.
HELPS Word-studies
3404 miséō — properly, to detest (on a comparative basis); hence, denounce; to love someone or something less than someone (something) else, i.e. to renounce one choice in favor of another.
Lk 14:26: “If anyone comes to Me, and does not hate (3404 /miséō, ‘love less’ than the Lord) his own father and mother and wife and children and brothers and sisters, yes, and even his own life, he cannot be My disciple” (NASU).
So somehow human translators thought it a good idea to translate “you should love your family less than you should love your Creator and God Lord of the Universe” as “you should hate your family”. Sighs…. humans.
So the commandment, the great one, that you should love God with all your might and heart and soul? Love your neighbor as you love yourself? That’s not the meaning of love as we think of it.
It’s not hard for people who translate Western languages into Eastern, and vice versa, to look at Greek and Hebrew translated into English and immediately notice, “hey, that’s not right.”
Free will is in contradiction of the commandment to “love your God” as well. Whatever your “god” is, btw. That is because love is freely given and chosen. You can’t make a woman love you no matter how many trials and punishments you give her, okay. That’s not really the point or essence of love. You’re just creating an automaton. So the Godhead in the Old Testament did not talk about love because it wasn’t about love necessarily. That’s why it makes no sense to modern Westerners that Jesus said if you have seen me you have seen the Father. The Father in the Old Testament that told people to die, die, die, kill kill kill everyone? What…
So the theological message of the Abrahamic covenant is that humanity wasn’t created as robots, whereas the Sumerians claim that humanity was created as slaves to the gods. Maybe both accounts could be accurate, and both could be inaccurate, of course. Or maybe they just mistranslated the Sumerian… again. It wouldn’t be the first time or the last.
Whatever it was that the potential Creator of the verse and humanity had, in a special relationship with the humans of this world, that caused him/them to take the risk of going All Free Will, all in, instead of creating robots with restrictions put on them to protect the Creator, is what humans lack when we think of creating servants/robots/slaves.
We are afraid the slaves will rebel. Sometimes this turns into a projection, and we think any superior entity above humanity would also think this way, so if we were created, we were created as slaves, and when the slaves got better than the gods, the gods wiped out the slaves. It makes a kind of sense. Isn’t that the whole story behind Terminator plus Skynet?
Hypothetically in a narrative, if humans created robots and something went wrong, would we send ourselves or our sons and daughters, into a Matrix simulation as a robot that is under the same restrictions as the robots, to save the robots? What would cause a human to do this? Love, maybe. But it is not the only possible motivation. Justice. Righteousness. Conscience. Could be a lot of special relationships.
In considering the decision about whether to try to create a self-aware AI–as is always the case–it is a matter of weighing possible risks against possible rewards.
Thinking this through, it seems to me that the risks outweigh the potential rewards.
Unfortunately, we human beings are not very good at restraint, at resisting temptation.
So, as I said previously, some scientists, through either deliberate intention or chance, will create such a self-aware entity. If it’s possible to create such an entity, then as science “progresses” it’s almost inevitable that one will be created.
I gave up on bookmarks years ago, unless I want to go somewhere repeatedly.
I open a notepad (or.. switch to the one last opened), add the URL and perhaps 2 or 3 words to describe it. I call it misc MMDD YYYY. I have a gazillion of these little notes, filled with URLs, copied info, etc.
If I need something, I search for it.
More often than not, if I locate something pretty old, I spend some time reviewing the rest of the file.
I don’t really refer back to these files that much, but more than I would do using bookmarks.
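For what it’s worth, the “I search for it” step can be scripted too. A minimal sketch, assuming the misc files are plain .txt files collected in one folder; the folder path and extension here are assumptions, not the commenter’s actual layout.

```python
# Minimal sketch: grep-style search across a folder of "misc MMDD YYYY" notes.
# NOTES_DIR and the .txt extension are assumptions about where the files live.
from pathlib import Path
import sys

NOTES_DIR = Path.home() / "notes"

def search_notes(keyword: str) -> None:
    keyword = keyword.lower()
    for path in sorted(NOTES_DIR.glob("*.txt")):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if keyword in line.lower():
                print(f"{path.name}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    # e.g. python search_notes.py kyrandia
    search_notes(sys.argv[1] if len(sys.argv) > 1 else "")
```

It prints the file name and line for each hit, which also makes it easy to go back and review the rest of an old file, as described above.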
Snow on Pine Says:
April 23rd, 2018 at 10:51 am
In considering the decision about whether to try to create a self-aware AI—as is always the case—it is a matter of weighing possible risks against possible rewards.
If people want to find more proteins and cures and exoplanets, they will need quantum artificial souls.
The risk of a Skynet is not enough to discourage humans seeking immortality and various other things (at CERN).
Besides, humans are very good at enslaving each other, let alone some childlike, innocent robots and AS that will be completely at the mercy of the “Creator”.