Meta Develops AI Speech Tool Voicebox, Holds Off Release Due to Misuse Concerns

Meta, a leading name in the tech industry, has made a significant leap in artificial intelligence (AI) by developing Voicebox, an advanced tool capable of generating lifelike speech.

Despite the tool’s potential, the company has chosen not to release it immediately due to concerns about potential misuse.

Voicebox

Announced last Friday, Voicebox can create convincing voice dialogue, opening up a range of possibilities, from enhancing communication across languages to delivering lifelike character dialogue in video games.

Unique in its functionality, Voicebox can generate speech it wasn’t specifically trained for.

All it requires is a text input and a short audio clip, which it then uses to generate entirely new speech in the voice of the source audio.

In a breakthrough from traditional AI speech tools, Voicebox learns directly from raw audio and its corresponding transcription, eliminating the need for task-specific training with carefully curated datasets.


Moreover, this impressive tool can produce audio in six languages – English, French, German, Spanish, Polish, and Portuguese – offering a realistic representation of natural human speech.

Potential Misuse and Meta’s Precautionary Approach

While Voicebox opens up exciting possibilities, Meta is fully aware of the potential misuse of such a tool.

The AI tool could be misused to create ‘deepfake’ dialogues, replicating the voices of public figures or celebrities in an unethical manner.

To counter this risk, Meta has developed AI classifiers, akin to spam filters, that can differentiate between human speech and speech generated by ‘Voicebox’.
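
For illustration, a detector of this kind can be framed as an ordinary binary classifier over audio features. The sketch below is only an assumption about the general approach, not Meta's actual system; the MFCC features (via librosa), the logistic-regression model, and the clip file names are all hypothetical choices made for the example.

```python
# A minimal sketch of a human-vs-synthetic speech classifier -- NOT Meta's detector.
# Feature choice (mean MFCCs via librosa) and model (logistic regression) are assumptions.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path: str) -> np.ndarray:
    """Summarise an audio clip as the mean of its MFCC frames."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical labelled training clips (placeholder paths, not real data).
human_clips = ["human_01.wav", "human_02.wav"]
synthetic_clips = ["generated_01.wav", "generated_02.wav"]

X = np.stack([clip_features(p) for p in human_clips + synthetic_clips])
y = np.array([0] * len(human_clips) + [1] * len(synthetic_clips))  # 1 = synthetic

detector = LogisticRegression(max_iter=1000).fit(X, y)
# Probability that a new clip is synthetic speech.
print(detector.predict_proba(clip_features("unknown.wav").reshape(1, -1)))
```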

The company is advocating for transparency in AI development, coupled with a firm commitment to responsible use. As part of this commitment, Meta has no current plans to make ‘Voicebox’ publicly available, emphasizing the need to balance openness with responsibility.

Instead of launching a functional tool, Meta is offering audio samples and a research paper to help researchers understand its potential and work towards responsible use.

Global Concerns Over AI Misuse

The rapid advancement of AI is causing concern among global leaders and international bodies, including the United Nations (UN).

Deepfakes have been utilized in scams and have propagated hate and misinformation online, as highlighted in a recent UN report.

Creating AI tools like ‘Voicebox’ offers numerous possibilities but underscores the importance of cautious development and responsible use to prevent misuse.

As we continue to stride forward in the field of AI, these concerns will remain paramount.

AI Chatbots Falling Short of EU Law Standards, a Stanford Study Reveals

A recent study conducted by researchers from Stanford University concludes that current large language models (LLMs) such as OpenAI’s GPT-4 and Google’s Bard are failing to meet the compliance standards set by the European Union (EU) Artificial Intelligence (AI) Act.

Understanding the EU AI Act

The EU AI Act, the first of its kind to regulate AI on a national and regional scale, was recently adopted by the European Parliament.

It not only oversees AI within the EU, a region of 450 million people, but also sets a precedent for AI regulation globally.

However, as per the Stanford study, AI companies have a considerable distance to cover to attain compliance.

Compliance Analysis of AI Providers

In their study, the researchers evaluated ten major model providers against the 12 requirements of the AI Act, scoring each provider on a 0 to 4 scale.

Stanford’s report says:

“We present the final scores in the above figure with the justification for every grade made available. Our results demonstrate a striking range in compliance across model providers: some providers score less than 25% (AI21 Labs, Aleph Alpha, Anthropic) and only one provider scores at least 75% (Hugging Face/BigScience) at present. Even for the highest-scoring providers, there is still significant margin for improvement. This confirms that the Act (if enacted, obeyed, and enforced) would yield significant change to the ecosystem, making substantial progress towards more transparency and accountability.”

The findings revealed significant variation in compliance levels, with some providers scoring below 25% and only Hugging Face/BigScience scoring at least 75%.

This suggests a considerable scope for improvement even for high-scoring providers.
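
To make the percentages concrete: with 12 requirements each graded on a 0 to 4 scale, a provider's maximum score is 48 points, so "below 25%" means fewer than 12 points and "at least 75%" means 36 or more. The snippet below works through that arithmetic with invented score sheets, not the study's actual per-provider data.

```python
# Back-of-the-envelope arithmetic implied by the rubric: 12 requirements, each scored 0-4.
def compliance_pct(scores: list[int]) -> float:
    assert len(scores) == 12 and all(0 <= s <= 4 for s in scores)
    return 100 * sum(scores) / (4 * len(scores))  # out of 48 possible points

# Illustrative, made-up score sheets -- not data from the Stanford study.
print(compliance_pct([1, 1, 0, 2, 1, 0, 1, 1, 2, 1, 1, 0]))  # ~22.9% -> below 25%
print(compliance_pct([4, 3, 4, 3, 4, 3, 4, 3, 4, 3, 4, 3]))  # 87.5%  -> above 75%
```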

The Problem Areas

[Figure: "Do Foundation Model Providers Comply with the Draft EU AI Act?" – problem areas]

The researchers highlighted key areas of non-compliance, including a lack of transparency in disclosing the status of copyrighted training data, energy consumption, emissions, and risk mitigation methodology.

They also observed a clear difference between open and closed model releases, with open releases providing better disclosure of resources but posing bigger challenges in controlling deployment.

The study concludes that all providers, regardless of their release strategy, have room for improvements.

A Reduction in Transparency

In recent times, major model releases have seen a decline in transparency.

OpenAI, for instance, chose not to disclose data or compute details in its GPT-4 report, citing the competitive landscape and safety implications.

Potential Impact of the EU AI Regulations

The Stanford researchers believe that the enforcement of the EU AI Act could significantly influence the AI industry.

The Act emphasises the need for transparency and accountability, encouraging large foundation model providers to adapt to new standards.

However, the swift adaptation and evolution of business practices to meet regulatory requirements remain a major challenge for AI providers.

Despite this, the researchers suggest that with robust regulatory pressure, providers could achieve higher compliance scores through meaningful yet feasible changes.

The Future of AI Regulation

The study offers an insightful perspective on the future of AI regulation.

The researchers assert that if properly enforced, the AI Act could substantially impact the AI ecosystem, promoting transparency and accountability.

As we stand on the threshold of regulating this transformative technology, the study emphasises the importance of transparency as a fundamental requirement for responsible AI deployment.

OpenAI Needs to Improve ChatGPT’s Reliability: Are Users Aware of Its Limitations?

ChatGPT, OpenAI's AI chatbot, is under scrutiny for its frequent inability to distinguish fact from fiction, which often leaves users led astray by the information it provides.

The Warning Sign Often Ignored

OpenAI has highlighted on its homepage one of the many limitations of ChatGPT – it may sometimes provide incorrect information.

Although this warning holds true for several information sources, it brings to light a concerning trend. Users often disregard this caveat, assuming the data provided by ChatGPT to be factual.

Unreliable Legal Aid: The Case of Steven A. Schwartz

The misleading nature of ChatGPT came into stark focus when US lawyer Steven A. Schwartz turned to the chatbot for case references in a lawsuit against Colombian airline Avianca. In a turn of events, all of the cases the AI suggested turned out to be non-existent.

Despite Schwartz’s concerns about the veracity of the information, the AI reassured him of its authenticity.

Such instances raise questions about the chatbot’s reliability.

Mistaken for a Reliable Source?

The frequency with which users treat ChatGPT as a credible source of information calls for a wider recognition of its limitations.

Over the past few months, there have been several reports of people being misled by its fabrications, which have so far been largely inconsequential but are nonetheless worrying.

One concerning instance involved a Texas A&M professor who used ChatGPT to verify if students’ essays were AI-generated.

ChatGPT incorrectly confirmed that they were, prompting the professor to threaten to fail the entire class. The incident underscores how the misinformation ChatGPT spreads can escalate into more serious consequences.

Cases like these do not entirely discredit the potential of ChatGPT and other AI chatbots. In fact, these tools, under the right conditions and with adequate safeguards, could be exceptionally useful.

However, it’s crucial to realize that at present, their capabilities are not entirely reliable.

The Role of the Media and OpenAI

The media and OpenAI bear some responsibility for this issue.

The media often portray these systems as emotionally intelligent entities while failing to emphasize their unreliability. Similarly, OpenAI could do more to warn users of the misinformation ChatGPT can produce.

Recognizing ChatGPT as a Search Engine

OpenAI should acknowledge that many users treat ChatGPT as a search engine and respond with clear, upfront warnings.

Chatbots present regenerated information as polished text delivered in a friendly, all-knowing tone, making it easy for users to assume the information is accurate.

This pattern reinforces the need for stronger disclaimers and cautionary measures from OpenAI.

The Path Forward

OpenAI needs to implement changes to reduce the likelihood of users being misled.

This could include programming ChatGPT to caution users to verify its sources when asked for factual citations, or making it clear when it is incapable of making a judgment.

OpenAI has indeed made improvements, making ChatGPT more transparent about its limitations.

However, inconsistencies persist and call for more action to ensure that users are fully aware of the potential for error and misinformation.

Without such measures, a simple disclaimer like “May occasionally generate incorrect information” seems woefully inadequate.

Spotting AI-Written Text Gets Easier with New Research

Researchers have found a new method to determine whether a piece of text was penned by a human or an artificial intelligence (AI).

This new detection technique leverages a model named RoBERTa, which helps to analyze the structure of text.

Finding the Differences

The study revealed that the text produced by AI systems, such as ChatGPT and Davinci, displays different patterns compared to human text.

When these texts were visualized as points in a multi-dimensional space, the points representing AI-written text were found to occupy a smaller region than those representing human-written text.

Using this key difference, researchers designed a tool that can resist common tactics employed to camouflage AI-written text.

The performance of this tool remained impressive even when it was tested with various types of text and AI models, showing high accuracy.

However, its accuracy decreased when the tool was tested with a sophisticated hiding method called DIPPER.

Despite this, it still performed better than other available detectors.

One exciting aspect of this tool is its ability to work with languages other than English. The research showed that while the pattern of text points varied across languages, AI-written text consistently occupied a smaller region than human-written text in every language tested.
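
As a rough illustration of the geometric idea, the sketch below embeds a passage with RoBERTa and measures how widely its token embeddings are spread. This is only an assumed proxy for the paper's actual measure: roberta-base, the SVD-based spread score, and the 90%-variance cutoff are arbitrary choices made for the example.

```python
# A minimal sketch, not the paper's method: proxy "area occupied" by the spread
# of a text's token-embedding point cloud from roberta-base (Hugging Face).
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")
model.eval()

def embedding_spread(text: str) -> int:
    """Embed each token and return a crude spread score: the number of
    principal directions needed to explain 90% of the cloud's variance."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0].numpy()  # (tokens, 768)
    centered = hidden - hidden.mean(axis=0)
    singular = np.linalg.svd(centered, compute_uv=False)
    variance = singular ** 2 / (singular ** 2).sum()
    return int(np.searchsorted(np.cumsum(variance), 0.90)) + 1

# Per the study's finding, AI-written passages would tend to score lower
# (occupy a smaller region) than human-written ones under such a measure.
print("spread score:", embedding_spread("Some paragraph whose origin we want to check."))
```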

Looking Ahead

While the researchers acknowledged that the tool faces difficulties when dealing with certain types of AI-generated text, they remain optimistic about potential enhancements in the future.

They also suggested exploring other models, similar to RoBERTa, for understanding the structure of text.

Earlier this year, OpenAI introduced a tool designed to distinguish between human and AI-generated text.

Although this tool provides valuable assistance, it is not flawless and can sometimes misjudge. The developers have made this tool publicly available for free to receive feedback and make necessary improvements.

These developments underscore the ongoing endeavors in the tech world to tackle the challenges posed by AI-generated content. Tools like these are expected to play a crucial role in battling misinformation campaigns and mitigating other harmful effects of AI-generated content.

Polygon Introduces AI Chatbot Assistant, Polygon Copilot

Polygon, a well-known developer that provides Ethereum scaling solutions, has leaped into the future of Web3. They have introduced an AI chatbot assistant, Polygon Copilot, to their platform.

What is Polygon Copilot?

Imagine a personal guide that can help you navigate the expansive ecosystem of decentralized applications (dApps) on Polygon.


Polygon Copilot is just that! It’s an AI assistant that can answer your questions and provide information about the Polygon platform.

It comes with three different user levels: Beginner, Advanced, and Degen, each designed for users at different stages of familiarity with the ecosystem.

The assistant is built on OpenAI’s GPT-3.5 and GPT-4 models and is incorporated into the user interface of Polygon.

One of the main goals of the Copilot is to offer insights, analytics, and guidance based on the Polygon protocol documentation.

A standout feature of Polygon Copilot is its commitment to transparency. It discloses the sources of the information it gives, which enables users to verify the information and explore the topic further.
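
How such a docs-grounded, source-citing assistant might be wired together is sketched below. This reflects a general retrieval-augmented pattern (retrieve relevant documentation, then ask a GPT model to answer from it and cite the snippet), not Polygon's actual implementation; the document snippets and the keyword retriever are invented for the example.

```python
# A minimal sketch of a docs-grounded, source-citing assistant -- an assumed
# architecture, not Polygon Copilot's code. Uses the OpenAI Python SDK (>=1.0).
from openai import OpenAI

docs = {
    "pos-overview.md": "Polygon PoS is an EVM-compatible chain secured by a validator set ...",
    "zkevm-intro.md": "Polygon zkEVM is a ZK rollup that posts validity proofs to Ethereum ...",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Toy retrieval: rank snippets by how many words they share with the question."""
    overlap = lambda name: len(set(question.lower().split()) & set(docs[name].lower().split()))
    return sorted(docs, key=overlap, reverse=True)[:k]

def answer(question: str) -> str:
    sources = retrieve(question)
    context = "\n".join(f"[{name}] {docs[name]}" for name in sources)
    client = OpenAI()  # requires OPENAI_API_KEY in the environment
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer only from the context and cite the [source] you used."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content

print(answer("What is Polygon zkEVM?"))
```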

Polygon’s step towards integrating AI technology is part of a growing trend in the Web3 world.

Other companies including Alchemy, Solana Labs, and Etherscan are also harnessing the potential of AI.

Using Polygon Copilot

To get started with Polygon Copilot, users need to connect a wallet, which serves as their account.

This account is given credits for asking questions, with new credits added every 24 hours.

And what sets Polygon Copilot apart? It’s not just any plain-speaking AI; it has a flair of its own. Ask it about the top NFT project on Polygon, and you’ll get a response full of personality.

However, it’s essential to remember that like all AI technology, Polygon Copilot isn’t perfect.

Users are cautioned that the AI may provide inaccurate information and to take the chatbot’s answers with a grain of salt.

Polygon has set limits on the number of responses the chatbot can generate to prevent spamming and overload.

What’s Polygon All About?

Polygon presents itself as ‘Ethereum 2.0’, addressing scalability issues within the Ethereum blockchain.

It enhances the value of any applications built on the Ethereum blockchain.

The introduction of the AI assistant is a leap forward for the platform. Whether you are a beginner looking for basic guidance or an advanced user trying to build complex products, Polygon Copilot is there to assist.

It’s also handy for analysts seeking accurate data about NFTs and dApps.

Web3 and the Promise of Data Ownership

Polygon’s use of AI reflects the evolution of the internet, known as Web 3.0. This version of the internet promises safety, transparency, and control over the data created by users.

Web 3.0 operates on blockchain technology, a decentralized system that removes corporate access to private data.

Blockchains emerged alongside Bitcoin, the first cryptocurrency, as part of an effort to break free from corporate control over user data.

In the spirit of Web 3.0, platforms like Polygon allow users to control access to their data and attach value to it, enhancing data ownership.

As the tech world moves forward, innovations like Polygon Copilot highlight the growing intersection between artificial intelligence and blockchain technology, redefining user experience in the process.

Amazon Invests $100 Million in Generative AI Center to Stay Competitive

Amazon Web Services (AWS), the cloud computing division of Amazon, has announced a strategic investment of $100 million in a new initiative called the AWS Generative AI Innovation Center.

This move aims to bolster startups and businesses focused on generative artificial intelligence, a rapidly growing field in AI.

The investment underscores AWS’s commitment to staying at the forefront of technological advancements as it competes with industry giants like Microsoft and Google.

Generative AI is a subset of AI that goes beyond traditional classification and prediction algorithms. Instead, it enables the generation of new content, including text, images, and music, based on learned patterns.

This innovative technology has the potential to significantly enhance productivity and creativity by offering novel solutions and ideas.

AWS’s Ongoing Efforts in Generative AI

AWS Acknowledges the Importance of Generative AI in the Competitive Landscape

The AWS Generative AI Innovation Center aims to connect AWS-affiliated experts, including data scientists, strategists, engineers, and solutions architects, with customers and partners to accelerate enterprise innovation in the field of generative AI.

By encouraging collaboration and delivering resources, AWS seeks to empower businesses in leveraging generative AI to drive success and growth.

Sri Elaprolu, heading the AWS Generative AI Innovation Center, highlighted the program’s objectives and its potential impact on various sectors.

Initially, the center will prioritize customers who have demonstrated interest in generative AI, focusing on industries such as financial services, healthcare, life sciences, media and entertainment, automotive, energy, utilities, and telecommunications.

This $100 million investment follows AWS’s recent efforts to promote generative AI, including a 10-week program for generative AI startups and the launch of Bedrock, a platform for building generative AI-powered applications.

Additionally, AWS has been collaborating with Nvidia to develop next-generation infrastructure for training AI models, supplementing its existing Trainium hardware.

The significant economic potential of generative AI is evident, with projections suggesting a potential addition of $4.4 trillion to the global economy annually.

As the AI industry continues to expand, reaching an estimated worth of $15.7 trillion by 2030, AWS’s strategic investment positions them to tap into this immense opportunity.

While challenges remain, such as meeting the demand for AI chips and ensuring enterprise security, AWS remains confident in its ability to deliver customer-centric solutions.

By prioritizing customer needs and leveraging its expertise, AWS aims to solidify its position as a leading provider of generative AI services and support.

As the race for dominance in AI intensifies, Amazon’s substantial investment reaffirms its commitment to staying ahead of the curve and driving innovation in the ever-evolving field of generative AI.

AI-Generated Deepfakes Becoming Harder to Spot, Warns Secta Labs CEO

Artificial Intelligence (AI) is revolutionizing numerous sectors, but with the boon comes the bane. AI image generators are becoming more sophisticated, making the task of detecting deepfakes increasingly difficult.

This issue is causing alarm among global leaders and law enforcement agencies who are concerned about the impact of AI-generated deepfakes on social media and in conflict zones.

“We’re getting into an era where we can no longer believe what we see,” says Marko Jak, co-founder and CEO of Secta Labs. “Right now, it’s easier because the deepfakes are not that good yet, and sometimes you can see it’s obvious.”


Jak speculates that we are nearing a point—possibly within a year—where discerning a fake image at first glance will be impossible.

As the head of an AI image-generation company, his insights carry particular weight.

The Rising Concerns about Deepfakes

A recent trend in AI-generated deepfakes has sparked outrage and concern. Deepfakes of murder victims have been appearing online, designed to evoke strong emotional reactions and gain clicks and likes.

This alarming trend emphasizes the urgency for more efficient ways to detect deepfakes.

Jak’s Austin-based startup, Secta Labs, which he co-founded in 2023, focuses on creating high-quality AI-generated images.

Secta Labs views its users as the owners of the AI models generated from their data, while the company serves as custodians creating images from these models.

The Call for AI Regulation

The potential misuse of advanced AI models has prompted world leaders to push for immediate action on AI regulation.

This has also led companies like Meta, creator of the new AI voice-generation platform Voicebox, to decide against releasing their most advanced tools to the public.

“It’s also necessary to strike the right balance between openness and responsibility,” a Meta spokesperson shared.

Deepfakes: A Tool for Misinformation

Earlier this month, the U.S. Federal Bureau of Investigation warned of AI deepfake extortion scams and criminals using photos and videos from social media to create fake content.

In the face of the growing deepfake problem, Jak suggests that the solution may not lie solely in detecting deepfakes, but rather in exposing them.

“AI is the first way you could spot [a deepfake],” Jak said. “There are people building artificial intelligence that you can put an image into like a video, and the AI can tell you if it was generated by AI.”

Technology to Counter Deepfakes

Jak acknowledges that an “AI arms race” is emerging with bad actors creating more sophisticated deepfakes to counter the technology designed to detect them.

Jak proposes that technology such as blockchain and cryptography might offer a solution to the deepfake problem by authenticating an image’s origin.

He also suggests a low-tech solution — harnessing the collective wisdom of internet users.

“A tweet can be misinformation just like a deepfake can be,” he said. Jak believes that social media platforms could benefit from leveraging their communities to verify whether the circulated content is genuine.

As AI advances, the battle against deepfakes continues, underlining the importance of both technological and social solutions to counter this growing issue.

Mercedes to Add ChatGPT to its Infotainment System

Mercedes is set to revolutionize the way drivers and passengers interact with their cars.

The automaker announced plans to integrate ChatGPT, an advanced conversational AI developed by OpenAI, into its infotainment systems.

The integration is part of a beta program launching on June 16, 2023, giving Mercedes customers in the U.S. a chance to experience more engaging and personalized interactions with their vehicles.

A New Era of Interactive Driving

This innovative development allows Mercedes owners to upgrade their existing MBUX (Mercedes-Benz User Experience) systems with ChatGPT’s functionalities.

A simple voice command, “Hey Mercedes, I want to join the beta program,” enrolls users in the beta and enhances their in-car interactions.

ChatGPT is designed to mimic human-like conversation across diverse subjects. While its capabilities include content synthesis, code writing, and even creative tasks like crafting wedding vows, its role in the car environment remains to be explored fully.

Mercedes believes that ChatGPT’s conversational skills will add value to the driving experience.

“Users will experience a voice assistant that not only accepts natural voice commands but can also conduct conversations,” the automaker stated in a press release.

This feature can provide drivers with comprehensive answers to complex questions, assist with destination details, or even suggest new dinner recipes, all while ensuring their focus remains on the road.

While this integration promises to make car journeys more interesting and engaging, some concerns arise.

Are these wide-ranging functionalities necessary for drivers or passengers? What kind of interactions do users actually prefer while on the move? The answers to these questions are yet to unfold.

Mercedes’ choice to integrate ChatGPT, a third-party service, is a strategic decision to elevate its voice interface.

However, with this enhancement comes the responsibility of managing user data.

Although the conversations between users and the voice interface are stored in the Mercedes-Benz Intelligent Cloud and anonymized, privacy concerns are still relevant.

Mercedes emphasizes that this data collection is crucial for understanding user behavior, shaping the rollout strategy, and improving the voice assistant across markets and languages.

While the feature’s practicality and privacy implications are subjects of ongoing discussion, one thing is clear: this beta program propels Mercedes towards a future where cars are not just machines, but conversational companions.

Time will tell how this new feature impacts the everyday driving experience of Mercedes owners.

High Demand for ChatGPT Experts: Companies Offering Salaries up to $185,000

Companies across the globe are increasingly recognizing the value of artificial intelligence (AI) and are willing to pay handsome salaries to professionals proficient in AI tools like ChatGPT.

These companies are offering an average salary of Rs 1.5 crore, with some even offering twice that amount.

Since its launch in 2022, ChatGPT, an AI chatbot developed by OpenAI, has revolutionized the tech industry.

Known for its ability to generate human-like text responses, the bot has found applications in a multitude of areas such as essay writing, music composition, and even poetry crafting.

As a result, expertise in this AI tool has become a hot commodity, opening up several job opportunities.

AI Creating Jobs

According to a study by ResumeBuilder, 91% of companies with job vacancies are seeking candidates skilled in ChatGPT, underscoring the belief that AI has the potential to increase productivity, save time, and enhance overall company performance.

A report by Business Insider indicates that companies listed on LinkedIn are ready to offer annual salaries of up to USD 185,000 (approximately Rs 1.5 crore) to individuals proficient in ChatGPT.

HR company Recruiting from Scratch, based in the US, is currently hiring for the position of Senior Machine Learning Engineer, Audio, with job requirements including familiarity with AI tools and platforms like ChatGPT.

The salary for this role ranges from USD 125,000 to USD 185,000 per year.

Interface.ai, a conversational AI company, is seeking a remote Machine Learning Engineer with experience in natural language processing and large language models like ChatGPT, offering a salary of up to USD 170,000 per year.

Emergence of Prompt Engineering

Despite concerns about AI leading to job displacement, ChatGPT has actually led to the creation of new job roles.

One such emerging profession is Prompt Engineering, which is growing in popularity.

Earlier this year, San Francisco-based AI startup Anthropic posted a job advertisement for a “Prompt Engineer and Librarian” role, offering a salary of up to USD 335,000 per year (approximately Rs 2.7 crore).

This demand for AI Prompt Engineering roles extends beyond San Francisco, with numerous job openings for prompt engineers found on platforms like LinkedIn and other job search websites.

Online platforms have also begun to offer courses focused on Prompt Engineering to cater to this growing demand.

DeepMind’s New AI Project, Gemini, Aims to Surpass OpenAI’s ChatGPT

Google’s DeepMind, the firm behind the historic victory of artificial intelligence (AI) over human intelligence in the complex board game Go, is gearing up for another breakthrough.

Demis Hassabis, the CEO of DeepMind, has revealed that they are developing a more powerful AI language model named Gemini, intended to surpass OpenAI’s ChatGPT in capabilities.

In 2016, DeepMind’s AI program, AlphaGo, astounded the world by defeating a world champion Go player.

Gemini, The Fusion of Advanced Techniques

Hassabis explained that the company is planning to leverage the techniques used in the creation of AlphaGo for the development of Gemini. Gemini, similar to GPT-4, is a large language model that works with text.

“At a high level you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models.”

Hassabis

The goal is to merge these technologies to equip Gemini with advanced features like problem-solving and planning.

The foundation of AlphaGo’s prowess was reinforcement learning, a method DeepMind has perfected.

The approach involves software learning to solve intricate tasks that require strategic decision-making through trial and error and performance feedback. AlphaGo also utilized a method called tree search to investigate and remember possible game moves.
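
To make the trial-and-error idea concrete, here is a toy tabular Q-learning loop on a five-cell corridor: the agent earns a reward only by reaching the rightmost cell, and repeated performance feedback gradually shapes its choices. This is a textbook illustration of reinforcement learning, not DeepMind's code; the environment and hyperparameters are invented for the example.

```python
# Toy reinforcement learning by trial and error: tabular Q-learning on a 5-cell corridor.
import random

N_STATES, ACTIONS = 5, (-1, +1)                 # actions: step left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3           # learning rate, discount, exploration

for episode in range(300):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit the best-known action.
        action = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Performance feedback: nudge the value estimate for (state, action).
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# After training, the greedy policy should choose +1 (move right) in every non-terminal state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```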

The intent is for Gemini to incorporate similar methods and push the boundaries of language models, possibly even performing tasks on the internet and on computers.

Hassabis indicated that the development of Gemini is a time-intensive process that might take several months and could involve significant funding.

Gemini’s Anticipated Capabilities

The completion of Gemini could significantly bolster Google’s response to the competition from ChatGPT and other AI technologies.

DeepMind was acquired by Google in 2014 after showcasing impressive results from software that employed reinforcement learning to master simple video games.

It demonstrated how AI can perform tasks once believed to be exclusive to human capabilities, often with superior proficiency.

DeepMind’s reinforcement learning expertise could potentially provide Gemini with unique abilities.

However, the journey towards AI advancements is fraught with risks and uncertainties. The rapid development in language models has raised concerns about misuse of technology and control issues.

Despite these concerns, Hassabis emphasized the extraordinary potential benefits of AI in areas such as healthcare and climate science, making it crucial not to halt its development.

Hassabis and his team, while acknowledging the challenges and potential risks, continue to forge ahead, focused on the development of more powerful and innovative AI models like Gemini.

The future of AI seems promising, yet the path ahead remains one of exploration and learning.