Tagged: artificial intelligence

  • feedwordpress 08:01:17 on 2019/03/22 Permalink
    Tags: artificial intelligence, gambling, Massachusetts, Puritans, systems

    “How about a little magic?”*… 



     

    sorcerer's apprentice

     

    Once upon a time (bear with me if you’ve heard this one), there was a company that made a significant advance in artificial intelligence. Armed with its incredibly sophisticated new system, it began putting the system to ever-wider uses, asking it to optimize the business for everything from the lofty to the mundane.

    And one day, the CEO wanted to grab a paperclip to hold some papers together, and found there weren’t any in the tray by the printer. “Alice!” he cried (for Alice was the name of his machine learning lead) “Can you tell the damned AI to make sure we don’t run out of paperclips again?”…

    What could possibly go wrong?

    [As you’ll read in the full and fascinating article, a great deal…]

    Computer scientists tell the story of the Paperclip Maximizer as a sort of cross between the Sorcerer’s Apprentice and the Matrix: a reminder of why it’s crucially important to tell your system not just what its goals are, but how it should balance those goals against costs. It frequently comes with a warning that it’s easy to forget a cost somewhere, and so you should always check your models carefully to make sure they aren’t accidentally turning into Paperclip Maximizers…

    But this parable is not just about computer science. Replace the paperclips in the story above with money, and you will see the rise of finance…

    Yonatan Zunger tells a powerful story that’s not (only) about AI: “The Parable of the Paperclip Maximizer.”

    * Mickey Mouse, The Sorcerer’s Apprentice

    ###

    As we’re careful what we wish for (and how we wish for it), we might recall that it was on this date in 1631 that the Puritans in the recently-chartered Massachusetts Bay Colony issued a General Court Ordinance that banned gambling: “whatsoever that have cards, dice or tables in their houses, shall make away with them before the next court under pain of punishment.”

    Mass gambling source

     

     
  • feedwordpress 07:01:27 on 2019/03/12 Permalink
    Tags: artificial intelligence, emotion, facial recognition

    “Outward show is a wonderful perverter of the reason”*… 



     

    facial analysis

    Humans have long hungered for a short-hand to help in understanding and managing other humans.  From phrenology to the Myers-Briggs Test, we’ve tried dozens of short-cuts… and tended to find that at best they weren’t very helpful; at worst, they reinforced inaccurate stereotypes, leading to results that were unfair and ineffective.  Still, the quest continues– these days powered by artificial intelligence.  What could go wrong?…

    Could a program detect potential terrorists by reading their facial expressions and behavior? This was the hypothesis put to the test by the US Transportation Security Administration (TSA) in 2003, as it began testing a new surveillance program called Screening of Passengers by Observation Techniques, or Spot for short.

    While developing the program, they consulted Paul Ekman, emeritus professor of psychology at the University of California, San Francisco. Decades earlier, Ekman had developed a method to identify minute facial expressions and map them on to corresponding emotions. This method was used to train “behavior detection officers” to scan faces for signs of deception.

    But when the program was rolled out in 2007, it was beset with problems. Officers were referring passengers for interrogation more or less at random, and the small number of arrests that came about were on charges unrelated to terrorism. Even more concerning was the fact that the program was allegedly used to justify racial profiling.

    Ekman tried to distance himself from Spot, claiming his method was being misapplied. But others suggested that the program’s failure was due to an outdated scientific theory that underpinned Ekman’s method; namely, that emotions can be deduced objectively through analysis of the face.

    In recent years, technology companies have started using Ekman’s method to train algorithms to detect emotion from facial expressions. Some developers claim that automatic emotion detection systems will not only be better than humans at discovering true emotions by analyzing the face, but that these algorithms will become attuned to our innermost feelings, vastly improving interaction with our devices.

    But many experts studying the science of emotion are concerned that these algorithms will fail once again, making high-stakes decisions about our lives based on faulty science…

    “Emotion detection” has grown from a research project to a $20bn industry; learn more about why that’s a cause for concern: “Don’t look now: why you should be worried about machines reading your emotions.”

    * Marcus Aurelius, Meditations

    ###

    As we insist on the individual, we might recall that it was on this date in 1989 that Tim Berners-Lee submitted a proposal to CERN for developing a new way of linking and sharing information over the Internet.

    It was the first time Berners-Lee proposed a system that would ultimately become the World Wide Web; but his proposal was a relatively vague request to research the details and feasibility of such a system.  He later submitted a proposal, on November 12, 1990, that much more directly detailed the actual implementation of the World Wide Web.

    source

     

     
  • feedwordpress 09:01:09 on 2019/02/24 Permalink
    Tags: artificial intelligence, deepfake, GPT-2, Pixar

    “By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it”*… 



     

    robit writer

     

    Recently, OpenAI announced its latest breakthrough, GPT-2, a language model that can write essays to a prompt, answer questions, and summarize longer works– sufficiently well that OpenAI has said it’s too dangerous to release the code (lest it result in “deepfake news” or other misleading mischief).

    Scott Alexander contemplates the results.  His conclusion:

    a brain running at 5% capacity is about as good as the best AI that the brightest geniuses working in the best-equipped laboratories in the greatest country in the world are able to produce in 2019. But:

    We believe this project is the first step in the direction of developing large NLP systems without task-specific training data. That is, we are developing a machine language system in the generative style with no explicit rules for producing text. We hope for future collaborations between computer scientists, linguists, and machine learning researchers.

    A boring sentiment from an interesting source: the AI wrote that when asked to describe itself. We live in interesting times.

    His complete post, eminently worthy of reading in full: “Do Neural Nets Dream of Electric Hobbits?”

    [image above, and another account of OpenAI’s creation: “OpenAI says its new robo-writer is too dangerous for public release“]

    * Eliezer Yudkowsky

    ###

    As we take the Turing Test, we might send elegantly-designed birthday greetings to Steve Jobs; he was born on this date in 1955.  While he is surely well-known to every reader here, let us note for the record that he was instrumental in developing the Macintosh, the computer that took Apple to unprecedented levels of success.  After leaving the company he had started with Steve Wozniak, Jobs continued his personal-computer work at NeXT Inc.  In 1997, he returned to Apple to lead the company into a new era based on NeXT technologies and consumer electronics.  Among Jobs’ achievements in this new era were the iMac, the iPhone, the iTunes music store, the iPod, and the iPad.  Under his leadership Apple was at one time the world’s most valuable company.  (And, of course, he bought Pixar from George Lucas, and oversaw both its rise to animation dominance and its sale to Disney– as a result of which Jobs became Disney’s largest single shareholder.)

    Jobs source

     

     
  • feedwordpress 08:01:16 on 2018/08/02 Permalink
    Tags: artificial intelligence, L'Enfant, modeling, secularization, Washington D.C.

    “It is impossible to work in information technology without also engaging in social engineering”*… 



     

    ai religion

    … Using a separate model, Future of Religion and Secular Transitions (FOREST), the team found that people tend to secularize when four factors are present: existential security (you have enough money and food), personal freedom (you’re free to choose whether to believe or not), pluralism (you have a welcoming attitude to diversity), and education (you’ve got some training in the sciences and humanities). If even one of these factors is absent, the whole secularization process slows down. This, they believe, is why the U.S. is secularizing at a slower rate than Western and Northern Europe.
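    The all-four-factors finding can be caricatured in a few lines (my own sketch of the stated rule, not the team’s actual model; the factor names follow the article but the rates and drag coefficient are invented):

```python
# Toy reading of the FOREST finding (hypothetical sketch, not the real model):
# secularization proceeds at full speed only when all four factors are
# present; each missing factor drags the rate down.

FACTORS = ("existential_security", "personal_freedom", "pluralism", "education")

def secularization_rate(society, base_rate=1.0, drag=0.25):
    present = sum(society.get(f, False) for f in FACTORS)
    # Each absent factor slows the whole process multiplicatively.
    return base_rate * drag ** (len(FACTORS) - present)

western_europe = {f: True for f in FACTORS}
us = dict(western_europe, education=False)   # education limited, per the article
print(secularization_rate(western_europe))   # -> 1.0
print(secularization_rate(us))               # -> 0.25
```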

    “The U.S. has found ways to limit the effects of education by keeping it local, and in private schools, anything can happen,” said [LeRon] Shults’s collaborator, Wesley Wildman, a professor of philosophy and ethics at Boston University. “Lately, there’s been encouragement from the highest levels of government to take a less than welcoming cultural attitude to pluralism. These are forms of resistance to secularization.”

    When you build a model, you can accidentally produce recommendations that you weren’t intending. Years ago, Wildman built a model to figure out what makes some extremist groups survive and thrive while others disintegrate. It turned out one of the most important factors is a highly charismatic leader who personally practices what he preaches. “This immediately implied an assassination criterion,” he said. “It’s basically, leave the groups alone when the leaders are less consistent, [but] kill the leaders of groups that have those specific qualities. It was a shock to discover this dropping out of the model. I feel deeply uncomfortable that one of my models accidentally produced a criterion for killing religious leaders.”

    The results of that model have been published, so it may already have informed military action. “Is this type of thing being used to figure out criteria for drone killings? I don’t know, because there’s this giant wall between the secret research in the U.S. and the non-secret side,” Wildman said. “I’ve come to assume that on the secret side they’ve pretty much already thought of everything we’ve thought of, because they’ve got more money and are more focused on those issues. … But it could be that this model actually took them there. That’s a serious ethical conundrum.”

    Shults told me, “I lose sleep at night on this. … It is social engineering. It just is—there’s no pretending like it’s not.” But he added that other groups, like Cambridge Analytica, are doing this kind of computational work, too. And various bad actors will do it without transparency or public accountability. “It’s going to be done. So not doing it is not the answer.” Instead, he and Wildman believe the answer is to do the work with transparency and simultaneously speak out about the ethical danger inherent in it.

    “That’s why our work here is two-pronged: I’m operating as a modeler and as an ethicist,” Wildman said. “It’s the best I can do.”…

    “Artificial Intelligence Shows Why Atheism Is Unpopular“– and other tales from the trenches of social modeling– the lessons learned and the ethical questions they raise.

    * Jaron Lanier

    ###

    As we’re careful what we wish for, we might send elaborately-designed birthday greetings to a practitioner of another, older form of social engineering, Pierre Charles L’Enfant; he was born on this date in 1754.  A military and civil engineer, he became a city planner, most famously crafting the unique “radiant” layout for Washington, D.C.

    source

     

     
  • feedwordpress 08:01:41 on 2018/04/25 Permalink
    Tags: artificial intelligence

    “Man is not born to solve the problem of the universe, but to find out what he has to do; and to restrain himself within the limits of his comprehension”*… 



     

    Half a century ago, the pioneers of chaos theory discovered that the “butterfly effect” makes long-term prediction impossible. Even the smallest perturbation to a complex system (like the weather, the economy or just about anything else) can touch off a concatenation of events that leads to a dramatically divergent future. Unable to pin down the state of these systems precisely enough to predict how they’ll play out, we live under a veil of uncertainty.
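    The butterfly effect described above is easy to demonstrate (a minimal sketch, not from the article; the logistic map at r = 4 is a standard chaotic system, and the starting values here are arbitrary): two trajectories that differ by one part in a billion soon diverge completely.

```python
# Minimal butterfly-effect demo (illustration only): iterate the chaotic
# logistic map x -> r*x*(1-x) from two nearly identical starting points
# and watch the tiny perturbation blow up.

def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-9)          # perturbed by one part in a billion
print(abs(a[10] - b[10]))               # after 10 steps: still tiny
print(max(abs(x - y) for x, y in zip(a[40:], b[40:])))  # later: order one
```

By around step 30 the two orbits are effectively uncorrelated, which is why pinning down the initial state “precisely enough” is a losing game.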

    But now the robots are here to help…

    In new computer experiments, artificial-intelligence algorithms can tell the future of chaotic systems.  For example, researchers have used machine learning to predict the chaotic evolution of a model flame front like the one pictured above.  Learn how– and what it may mean– at “Machine Learning’s ‘Amazing’ Ability to Predict Chaos.”

    * Johann Wolfgang von Goethe

    ###

    As we contemplate complexity, we might recall that it was on this date in 1961 that Robert Noyce was issued patent number 2,981,877 for his “semiconductor device-and-lead structure,” the first patent for what would come to be known as the integrated circuit.  In fact, another engineer, Jack Kilby, had separately and essentially simultaneously developed the same technology (Kilby’s design was rooted in germanium; Noyce’s in silicon) and had filed a few months earlier than Noyce… a fact that was recognized in 2000 when Kilby was awarded the Nobel Prize– in which Noyce, who had died in 1990, did not share.

    Noyce (left) and Kilby (right)

     source

     

     

     
  • feedwordpress 08:01:42 on 2018/04/24 Permalink
    Tags: Anthony Trollope, artificial intelligence, Joscha Bach, Nick Bostrom, pillar box, post box

    “Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower”*… 



     

    When warning about the dangers of artificial intelligence, many doomsayers cite philosopher Nick Bostrom’s paperclip maximizer thought experiment. [See here for an amusing game that demonstrates Bostrom’s fear.]

    Imagine an artificial intelligence, he says, which decides to amass as many paperclips as possible. It devotes all its energy to acquiring paperclips, and to improving itself so that it can get paperclips in new ways, while resisting any attempt to divert it from this goal. Eventually it “starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities”. This apparently silly scenario is intended to make the serious point that AIs need not have human-like motives or psyches. They might be able to avoid some kinds of human error or bias while making other kinds of mistake, such as fixating on paperclips. And although their goals might seem innocuous to start with, they could prove dangerous if AIs were able to design their own successors and thus repeatedly improve themselves. Even a “fettered superintelligence”, running on an isolated computer, might persuade its human handlers to set it free. Advanced AI is not just another technology, Mr Bostrom argues, but poses an existential threat to humanity.

    Harvard cognitive scientist Joscha Bach, in a tongue-in-cheek tweet, has countered this sort of idea with what he calls “The Lebowski Theorem”:

    No superintelligent AI is going to bother with a task that is harder than hacking its reward function.
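    In the same tongue-in-cheek spirit, the theorem can be sketched as a one-line decision rule (my hypothetical illustration, not from Bach’s tweet; the action names and effort numbers are invented): if the agent weighs reward against effort, hacking its own reward counter dominates every harder plan.

```python
# Tongue-in-cheek sketch of the Lebowski Theorem (hypothetical): an agent
# that weighs reward against effort finds that hacking its own reward
# function beats actually doing the hard task.

actions = {
    "convert_earth_to_paperclips": {"effort": 10**9, "reward": 10**9},
    "hack_own_reward_function":    {"effort": 1,     "reward": float("inf")},
}

def lazy_superintelligence(actions):
    # Pick the action with the best reward-per-effort ratio.
    return max(actions, key=lambda a: actions[a]["reward"] / actions[a]["effort"])

print(lazy_superintelligence(actions))   # -> hack_own_reward_function
```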

    Why it’s cool to take Bobby McFerrin’s advice at: “The Lebowski Theorem of machine superintelligence.”

    * Alan Kay

    ###

    As we get down with the Dude, we might send industrious birthday greetings to prolific writer Anthony Trollope; he was born on this date in 1815.  Trollope wrote 47 novels, including those in the “Chronicles of Barsetshire” and “Palliser” series (along with short stories and occasional prose).  And he had a successful career as a civil servant; indeed, his best-known creation is surely not any of his books but the iconic red British mail drop, the “pillar box,” which he introduced in his capacity as Postal Surveyor.

     The end of a novel, like the end of a children’s dinner-party, must be made up of sweetmeats and sugar-plums.  (source)

     

     
  • feedwordpress 09:01:50 on 2018/01/24 Permalink
    Tags: artificial intelligence

    “Humans as we know them are just one morphological waypoint on the long road of evolution”*… 



     

    Imagine a world where the human race is no longer the dominant species.

    Extinct through war or spectacular accident. By devastating pandemic, super-natural disaster, or cosmic cataclysm.

    Passed through the Singularity to become unrecognisably posthuman, and left the natural order forever behind.

    Infected by a virus, hijacked by a parasite or otherwise co-opted to become ex-human – a “bio zombie” – moved sideways to a new position as ecological actor.

    Gently absorbed into – or completely overshadowed by the unfathomable actions of – a superior civilisation comprising benevolent – or unacknowledging – emissaries from the stars (or extra-dimensions).

    Dethroned by the return of ancient species, the reawakening of the slumbering Old Ones… Out-competed by the arrival of an invasive species from another world making the Earth just one habitat in a galactic ecology.

    It could be far into the future or The Day After Tomorrow.

    Robots may rule the world… not so much enslaving as letting us retire to a life of Fully Automated Luxury Gay Space Communism; life in The Culture as Iain M. Banks foresaw it could be.

    What is the world like then? After us…

    Imagine a world where the human race is no longer the dominant species: “What is the Post-Human World.”

    * Annalee Newitz in “When Will Humanity Finally Die Out?”

    ###

    As we stretch our frames, we might spare a thought for Marvin Minsky; he died on this date in 2016.  A mathematician and cognitive scientist by training, he was founding director of MIT’s Artificial Intelligence Project (the MIT AI Lab).  Minsky authored several widely-used texts, and made many contributions to AI, cognitive psychology, mathematics, computational linguistics, robotics, and optics.  He held several patents, including those for the first neural-network simulator (SNARC, 1951), the first head-mounted graphical display, the first confocal scanning microscope, and the LOGO “turtle” device (with his friend and frequent collaborator Seymour Papert).  His other inventions include mechanical hands and the “Muse” synthesizer.

     source

     

     