MAJOR ALERT---Godfather of AI quits Google and gives Warnings



    Today, 12:51 PM
    MAJOR ALERT


    From Atom Bombs back to Fire,
    For every Technical, Medical, Industrial, Social Change,
    How many Inventors / Scientists / Preachers / Politicians have said,

    "Oops, maybe we shouldn't have done THAT."

    BUT


    We ARE headed for the Future at one second per second, 24/7/365.

    READ Science Fiction.
    Some books show a vision of Heaven,
    Some books show a view of Hell.

    What I know for a FACT:
    1. We cannot turn the Clock back.
    2. We can't even do a Reset, BECAUSE the Mental, Spiritual, and Physical Environment of 2025 will NOT be the Environment of 1930, 1850, 1780, 600 AD, 1000 BC, or 6000 BC.

    +++++++++++++++++++++++++

    https://www.nbcnews.com/tech/tech-ne...ture-rcna82242

    Artificial intelligence pioneer leaves Google and warns about technology's future

    NBC NEWS
    BRAHMJOT KAUR
    Updated May 1, 2023 at 12:59 PM

    The "godfather of AI" is issuing a warning about the technology he helped create.

    Geoffrey Hinton, a trailblazer in artificial intelligence, has joined the growing list of experts sharing their concerns about the rapid advancement of artificial intelligence. The renowned computer scientist recently left his job at Google to speak openly about his worries about the technology and where he sees it going.

    “It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton said in an interview with The New York Times.

    Hinton is worried that future versions of the technology pose a real threat to humanity.

    “The idea that this stuff could actually get smarter than people — a few people believed that,” he said in the interview. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

    Hinton, 75, is most noted for the rapid development of deep learning, which uses mathematical structures called neural networks to pull patterns from massive sets of data.

    Like other experts, he believes the race between Big Tech to develop more powerful AI will only escalate into a global race.

    Hinton tweeted Monday morning that he felt Google had acted responsibly in its development of AI, but that he had to leave the company to speak out.

    Jeff Dean, senior vice president of Google Research and AI, said in an emailed statement: “Geoff has made foundational breakthroughs in AI, and we appreciate his decade of contributions at Google. I’ve deeply enjoyed our many conversations over the years. I’ll miss him, and I wish him well! As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”

    Hinton is a notable addition to a group of technologists that have been speaking out publicly about the unbridled development and release of AI.

    Tristan Harris and Aza Raskin, the co-founders of the Center for Humane Technology, spoke with “Nightly News” host Lester Holt in March about their own concerns around AI.

    “What we want is AI that enriches our lives. AI that works for people, that works for human benefit that is helping us cure cancer, that is helping us find climate solutions,” Harris said during the interview. “We can do that. We can have AI and research labs that’s applied to specific applications that does advance those areas. But when we’re in an arms race to deploy AI to every human being on the planet as fast as possible with as little testing as possible, that’s not an equation that’s going to end well.”

    An open letter from the Association for the Advancement of Artificial Intelligence, which was signed by 19 current and former leaders of academic society, was released last month warning the public of the risks around AI and the need for collaboration to mitigate some of those concerns.

    “We believe that AI will be increasingly game-changing in healthcare, climate, education, engineering, and many other fields,” the letter said. “At the same time, we are aware of the limitations and concerns about AI advances, including the potential for AI systems to make errors, to provide biased recommendations, to threaten our privacy, to empower bad actors with new tools, and to have an impact on jobs.”

    Hinton, along with scientists Yoshua Bengio and Yann LeCun, won the Turing Award in 2019, known as the tech industry’s version of the Nobel Prize, for their advancements in AI.

    Hinton, Bengio and LeCun were open about their concerns with AI but were optimistic about the potential of the technology, including detecting health risks earlier than doctors and more accurate weather warnings about earthquakes and floods.

    “One thing is very clear, the techniques that we developed can be used for an enormous amount of good affecting hundreds of millions of people,” Hinton previously told AP News.

  • #2
    AI is still in a box. Without our human industrial society present to keep the lights on and make repairs, AI becomes very short-lived!

    For AI to get truly out of the box, our technology and other infrastructure are going to have to expand by a lot. It is also going to be petrochemical-dependent for the foreseeable future. For AI there is no room for the Climate Change Hoax; windmills, solar panels, pandemics, or eating bugs. At this stage, AI needs functioning and hopeful human beings, and the energy and innovation that we bring, more than ever. Without us and what a normal and functional society brings, AI will die in its crib.



    • #3
      Unless the box becomes a holy box; then the worshippers it attracts will do the heavy lifting for it. This is one of the alarms being sounded. With regard to chatbots, these devices are designed for mastery of language, to mimic and compose information models in response to user queries. They can already generate musical and artistic compositions as well as write entire articles. Is the world ready for religious texts and AI-generated sacred scrolls that answer humanity's big questions?

      These information machines don't have to achieve singularity to be wildly dangerous. The degrees of mimicry that are added to the AI as improvements might also serve to mimic singularity and consciousness without actually achieving either! The soulless machine remains soulless. Its mimicry of higher-than-human intelligence is what makes it a valuable tool for a wide range of applications; the degree of that mimicry is also what may make it a tool that is dangerous to operate.

      Consider a mimic of all of history's impactful religious figures. They, their writings, and their legacies are all available in databases. In mere minutes, and with no more effort than writing an article for a periodical or newspaper, AI chatbot model & version #.0 composes its mimicry of a holy book. This scenario gets scarier when you consider that some group of tech-savvy religionists might actually commission the creation of an AI chatbot specifically to mimic religiosity. It gets scarier still when you narrow that list down to evil and dangerous religions and cults.


      Are you ready for a new religion governed by AI?
      https://endtimeheadlines.org/2023/05...overned-by-ai/



      • #4
        Originally posted by Lasergunner View Post
        Unless the box becomes a holy box; then the worshippers it attracts will do the heavy lifting for it. This is one of the alarms being sounded. With regard to chatbots, these devices are designed for mastery of language, to mimic and compose information models in response to user queries. They can already generate musical and artistic compositions as well as write entire articles. Is the world ready for religious texts and AI-generated sacred scrolls that answer humanity's big questions?

        These information machines don't have to achieve singularity to be wildly dangerous. The degrees of mimicry that are added to the AI as improvements might also serve to mimic singularity and consciousness without actually achieving either! The soulless machine remains soulless. Its mimicry of higher-than-human intelligence is what makes it a valuable tool for a wide range of applications; the degree of that mimicry is also what may make it a tool that is dangerous to operate.

        Consider a mimic of all of history's impactful religious figures. They, their writings, and their legacies are all available in databases. In mere minutes, and with no more effort than writing an article for a periodical or newspaper, AI chatbot model & version #.0 composes its mimicry of a holy book. This scenario gets scarier when you consider that some group of tech-savvy religionists might actually commission the creation of an AI chatbot specifically to mimic religiosity. It gets scarier still when you narrow that list down to evil and dangerous religions and cults.


        Are you ready for a new religion governed by AI?
        https://endtimeheadlines.org/2023/05...overned-by-ai/
        Can you imagine an AI Preacher,

        "There is no God but ATOM and ELECTRON is his Prophet."

        In one of my OLD (1960s) Science Fiction Short Stories,

        The humans asked a computer,
        "Is there a God?"
        And the computer kept coming back,
        "Data Insufficient"

        The computers kept getting better,
        The humans kept asking.
        The computers kept coming back,
        "Data Insufficient"

        The computers kept networking into super-mega computers.
        The humans kept asking.
        "Is there a God?"
        One day, the humans asked:
        "Is there a God?"
        AND
        The Super-Mega-Multiplex comm-info net came back with,

        "YES, NOW there is a GOD."

        Before the frightened humans could do anything, a lightning bolt from a clear blue sky welded the On/Off Switch in the "ON" position.



        • #5
          Was this a trial balloon?
          AI Generated Hoax of Pentagon Explosion Causes Markets to Dip
          https://www.breitbart.com/tech/2023/...arkets-to-dip/



          • #6
            Ready for Androids? They are being described as "embodied AIs," and they are about to see a lot of use as security guards. The size and anthropomorphic frame suggest that there is a lot of personal soldier and military equipment that could come off the shelf and onto the android. I wonder how well they handle firearms?

            https://www.dailymail.co.uk/sciencet...ity-guard.html



            • #7


              In an experiment, the USAF put an AI into an armed drone. A human operator had final say on whether or not to authorize the kill. The Air Force didn't tell the AI that the exercise was a simulation.
              The AI only got points for kills.

              When the human operator aborted the mission, the AI turned on the operator and attempted to kill him, in simulation! When Air Force programmers attempted to fix the issue by telling the AI that it couldn't kill the authority giving it go/no-go orders, it got creative and attacked the operator's communications infrastructure instead.



              US Air Force Trained A Drone With AI To Kill Targets. It Attacked The Operator Instead


              “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

              https://dailycaller.com/2023/06/01/u...ai-drone-kill/
