Polygon Introduces AI Chatbot Assistant, Polygon Copilot

Polygon, a well-known developer of Ethereum scaling solutions, has leaped into the future of Web3 by introducing an AI chatbot assistant, Polygon Copilot, on its platform.

What is Polygon Copilot?

Imagine a personal guide that can help you navigate the expansive ecosystem of decentralized applications (dApps) on Polygon. Polygon Copilot is exactly that: an AI assistant that answers your questions and provides information about the Polygon platform.

It comes with three different user levels: Beginner, Advanced, and Degen, each designed for users at different stages of familiarity with the ecosystem.

The assistant is built on OpenAI’s GPT-3.5 and GPT-4 models and is incorporated into the user interface of Polygon.

One of the main goals of the Copilot is to offer insights, analytics, and guidance based on the Polygon protocol documentation.

A standout feature of Polygon Copilot is its commitment to transparency. It discloses the sources of the information it gives, which enables users to verify the information and explore the topic further.
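
Polygon has not published Copilot's internals, but an assistant that grounds its answers in protocol documentation and discloses its sources is typically assembled along the following lines. In this minimal sketch, the documentation snippets, URLs, and keyword retriever are hypothetical stand-ins; only the OpenAI chat-completion call reflects a real API.

```python
# Illustrative sketch of a documentation-grounded assistant that cites its
# sources. The documents, URLs, and retriever are hypothetical stand-ins;
# Polygon has not published how Copilot actually works.
import openai  # the 0.x-era OpenAI Python client; assumes openai.api_key is set

DOCS = {
    "https://wiki.polygon.technology/docs/pos/":
        "Polygon PoS is a proof-of-stake sidechain secured by its own validators.",
    "https://wiki.polygon.technology/docs/zkevm/":
        "Polygon zkEVM is a zero-knowledge rollup that posts validity proofs to Ethereum.",
}

def retrieve(question: str, k: int = 2) -> list[tuple[str, str]]:
    """Naive keyword-overlap retriever standing in for a real vector index."""
    words = set(question.lower().split())
    ranked = sorted(DOCS.items(),
                    key=lambda kv: len(words & set(kv[1].lower().split())),
                    reverse=True)
    return ranked[:k]

def answer(question: str) -> tuple[str, list[str]]:
    sources = retrieve(question)
    context = "\n\n".join(text for _, text in sources)
    reply = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer strictly from the documentation below; say so "
                        "if it does not cover the question.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    # Hand the source URLs back alongside the answer so users can verify it.
    return reply.choices[0].message.content, [url for url, _ in sources]
```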

Polygon’s step towards integrating AI technology is part of a growing trend in the Web3 world.

Other companies including Alchemy, Solana Labs, and Etherscan are also harnessing the potential of AI.

Using Polygon Copilot

To start with Polygon Copilot, users need to connect a wallet that will serve as the user account.

This account is given credits for asking questions, with new credits added every 24 hours.
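
Polygon has not documented how the credit system works under the hood. Purely as an illustration, a daily allowance keyed to a wallet address might look like the following sketch, where the allowance size and refill rule are assumptions:

```python
# Hypothetical sketch of Copilot-style question credits tied to a wallet
# address. The allowance of 10 and the reset-after-24-hours rule are
# assumptions for illustration, not documented figures.
import time

DAILY_CREDITS = 10                             # assumed allowance
_accounts: dict[str, tuple[int, float]] = {}   # wallet -> (credits, last_refill)

def spend_credit(wallet: str) -> bool:
    """Consume one credit, refilling the balance every 24 hours."""
    credits, last_refill = _accounts.get(wallet, (DAILY_CREDITS, time.time()))
    if time.time() - last_refill >= 24 * 3600:  # a day has passed: top up
        credits, last_refill = DAILY_CREDITS, time.time()
    if credits == 0:
        return False                            # out of questions until the next refill
    _accounts[wallet] = (credits - 1, last_refill)
    return True
```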

And what sets Polygon Copilot apart? It’s not just any plain-speaking AI; it has a flair of its own. Ask it about the top NFT project on Polygon, and you’ll get a response full of personality.

However, it’s essential to remember that like all AI technology, Polygon Copilot isn’t perfect.

Users are cautioned that the AI may provide inaccurate information and are advised to take the chatbot’s answers with a grain of salt.

Polygon has set limits on the number of responses the chatbot can generate to prevent spamming and overload.

What’s Polygon All About?

Polygon presents itself as ‘Ethereum 2.0’, addressing scalability issues within the Ethereum blockchain.

By making transactions faster and cheaper, it enhances the value of applications built on the Ethereum blockchain.

The introduction of the AI assistant is a leap forward for the platform. Whether you are a beginner looking for basic guidance or an advanced user trying to build complex products, Polygon Copilot is there to assist.

It’s also handy for analysts seeking accurate data about NFTs and dApps.

Web3 and the Promise of Data Ownership

Polygon’s use of AI reflects the evolution of the internet, known as Web 3.0. This version of the internet promises safety, transparency, and control over the data created by users.

Web 3.0 operates on blockchain technology, a decentralized system that removes corporate access to private data.

Blockchains were born alongside Bitcoin, the first cryptocurrency, aiming to break free from corporations’ control over our data.

In the spirit of Web 3.0, platforms like Polygon allow users to control access to their data and attach value to it, enhancing data ownership.

As the tech world moves forward, innovations like Polygon Copilot highlight the growing intersection between artificial intelligence and blockchain technology, redefining user experience in the process.

Spotting AI-Written Text Gets Easier with New Research

Researchers have found a new method to determine whether a piece of text was penned by a human or an artificial intelligence (AI).

This new detection technique leverages a language model named RoBERTa to analyze the structure of text.

Finding the Differences

The study revealed that the text produced by AI systems, such as ChatGPT and Davinci, displays different patterns compared to human text.

When these texts were visualized as points in a multi-dimensional space, the points representing AI-written text were found to occupy a smaller region than the points representing human-written text.

Using this key difference, researchers designed a tool that can resist common tactics employed to camouflage AI-written text.

The performance of this tool remained impressive even when it was tested with various types of text and AI models, showing high accuracy.

However, its accuracy decreased when the tool was tested with a sophisticated hiding method called DIPPER.

Despite this, it still performed better than other available detectors.

One of the exciting aspects of this tool is its capability to work with languages other than English. The research showed that while the pattern of text points varied across languages, AI-written text consistently occupied a smaller region than human-written text in every language tested.
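
One way to operationalize "the region occupied by points" is to estimate the intrinsic dimension of a text's embedding cloud. As a rough sketch of that idea (not the authors' exact pipeline), one could embed a passage with RoBERTa and apply a generic nearest-neighbour dimension estimator; the estimator choice and the decision threshold below are illustrative assumptions:

```python
# Rough sketch: embed a passage with RoBERTa, then estimate the intrinsic
# dimension of its token-embedding cloud. The Levina-Bickel MLE estimator and
# the 9.0 cutoff are generic stand-ins, not the paper's exact method.
import numpy as np
import torch
from sklearn.neighbors import NearestNeighbors
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")

def token_embeddings(text: str) -> np.ndarray:
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (num_tokens, 768)
    return hidden.numpy()

def intrinsic_dimension(points: np.ndarray, k: int = 10) -> float:
    """Levina-Bickel maximum-likelihood estimate from k nearest neighbours.
    Assumes the passage has more than k + 1 tokens."""
    dists, _ = NearestNeighbors(n_neighbors=k + 1).fit(points).kneighbors(points)
    # dists[:, 0] is each point's zero distance to itself, so skip column 0.
    ratios = np.log(dists[:, [k]] / dists[:, 1:k])
    return float(((k - 1) / ratios.sum(axis=1)).mean())

def looks_ai_written(text: str, threshold: float = 9.0) -> bool:
    # Lower-dimensional embedding clouds were the reported AI-text signature;
    # the threshold here is purely illustrative.
    return intrinsic_dimension(token_embeddings(text)) < threshold
```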

Looking Ahead

While the researchers acknowledged that the tool faces difficulties when dealing with certain types of AI-generated text, they remain optimistic about potential enhancements in the future.

They also suggested exploring other models, similar to RoBERTa, for understanding the structure of text.

Earlier this year, OpenAI introduced a tool designed to distinguish between human and AI-generated text.

Although this tool provides valuable assistance, it is not flawless and can sometimes misjudge. The developers have made this tool publicly available for free to receive feedback and make necessary improvements.

These developments underscore the ongoing endeavors in the tech world to tackle the challenges posed by AI-generated content. Tools like these are expected to play a crucial role in battling misinformation campaigns and mitigating other harmful effects of AI-generated content.

OpenAI Needs to Improve ChatGPT’s Reliability: Are Users Aware of Its Limitations?

ChatGPT, the AI chatbot created by OpenAI, is under scrutiny for its frequent inability to distinguish fact from fiction, which often leads users astray.

The Warning Sign Often Ignored

OpenAI has highlighted on its homepage one of the many limitations of ChatGPT – it may sometimes provide incorrect information.

Although this warning holds true for several information sources, it brings to light a concerning trend. Users often disregard this caveat, assuming the data provided by ChatGPT to be factual.

Unreliable Legal Aid: The Case of Steven A. Schwartz

The misleading nature of ChatGPT came into stark focus when US lawyer Steven A. Schwartz turned to the chatbot for case references in a lawsuit against Colombian airline Avianca. Every case the AI suggested turned out to be non-existent.

Despite Schwartz’s concerns about the veracity of the information, the AI reassured him of its authenticity.

Such instances raise questions about the chatbot’s reliability.

A Misunderstood Reliable Source?

The frequency with which users treat ChatGPT as a credible source of information calls for a wider recognition of its limitations.

Over the past few months, there have been several reports of people being misled by its fabrications; most have been inconsequential, but they are nonetheless worrying.

One concerning instance involved a Texas A&M professor who used ChatGPT to verify if students’ essays were AI-generated.

ChatGPT incorrectly confirmed that they were, leading the professor to threaten to fail the entire class. The incident underscores how the misinformation ChatGPT spreads can have serious consequences.

Cases like these do not entirely discredit the potential of ChatGPT and other AI chatbots. In fact, these tools, under the right conditions and with adequate safeguards, could be exceptionally useful.

However, it’s crucial to realize that at present, their capabilities are not entirely reliable.

The Role of the Media and OpenAI

The media and OpenAI bear some responsibility for this issue.

Media often portrays these systems as emotionally intelligent entities, failing to emphasize their unreliability. Similarly, OpenAI could do more to warn users of the potential misinformation that ChatGPT can provide.

Recognizing ChatGPT as a Search Engine

OpenAI should acknowledge users’ tendency to treat ChatGPT as a search engine and provide clear, upfront warnings accordingly.

Chatbots present information as fluent, confident text delivered in a friendly, all-knowing tone, making it easy for users to assume the information is accurate.

This pattern reinforces the need for stronger disclaimers and cautionary measures from OpenAI.

The Path Forward

OpenAI needs to implement changes to reduce the likelihood of users being misled.

This could include programming ChatGPT to caution users to verify its sources when asked for factual citations, or making it clear when it is incapable of making a judgment.
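
As a crude illustration of the first suggestion (a developer-side workaround, not anything OpenAI has announced), the chat API already lets one prepend such a caution through the system prompt; the prompt wording here is illustrative:

```python
# Hypothetical guardrail: instruct the model to flag uncertainty and urge
# verification whenever it is asked for factual citations.
import openai  # the 0.x-era OpenAI Python client; assumes openai.api_key is set

SYSTEM_PROMPT = (
    "When asked for factual claims, citations, or case references, remind the "
    "user to verify them against primary sources, and say 'I cannot verify "
    "this' whenever you are unsure, rather than guessing."
)

reply = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Cite three cases on airline liability."},
    ],
)
print(reply.choices[0].message.content)
```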

OpenAI has indeed made improvements, making ChatGPT more transparent about its limitations.

However, inconsistencies persist and call for more action to ensure that users are fully aware of the potential for error and misinformation.

Without such measures, a simple disclaimer like “May occasionally generate incorrect information” seems wholly inadequate.

AI Chatbots Falling Short of EU Law Standards, a Stanford Study Reveals

A recent study conducted by researchers from Stanford University concludes that current large language models (LLMs) such as OpenAI’s GPT-4 and Google’s Bard are failing to meet the compliance standards set by the European Union (EU) Artificial Intelligence (AI) Act.

Understanding the EU AI Act

The EU AI Act, the first of its kind to regulate AI at a national and regional scale, was recently adopted by the European Parliament.

It not only oversees AI within the EU, a region housing 450 million people, but also sets a precedent for AI regulations globally.

However, as per the Stanford study, AI companies have a considerable distance to cover to attain compliance.

Compliance Analysis of AI Providers

In their study, the researchers evaluated ten major model providers against the 12 requirements of the AI Act, scoring each provider from 0 to 4 on every requirement.

Stanford’s report says:

“We present the final scores in the above figure with the justification for every grade made available. Our results demonstrate a striking range in compliance across model providers: some providers score less than 25% (AI21 Labs, Aleph Alpha, Anthropic) and only one provider scores at least 75% (Hugging Face/BigScience) at present. Even for the highest-scoring providers, there is still significant margin for improvement. This confirms that the Act (if enacted, obeyed, and enforced) would yield significant change to the ecosystem, making substantial progress towards more transparency and accountability.”

The findings revealed significant variation in compliance levels, with some providers scoring below 25% and only Hugging Face/BigScience scoring at least 75%.

This suggests a considerable scope for improvement even for high-scoring providers.
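
To make those percentages concrete: with 12 requirements each graded from 0 to 4, a provider can earn at most 48 points, so the quoted thresholds translate directly into point totals (the example scores below are implied by the percentages, not taken from the study’s figure):

```python
# The Stanford rubric in points: 12 requirements, each scored 0-4.
MAX_POINTS = 12 * 4                      # 48 points available in total

def pct(points: int) -> float:
    return 100 * points / MAX_POINTS

print(pct(36))  # 75.0 -- the bar that only Hugging Face/BigScience cleared
print(pct(12))  # 25.0 -- several providers landed below this line
```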

The Problem Areas

Figure from the study: “Do Foundation Model Providers Comply with the Draft EU AI Act?” (problem areas).

The researchers highlighted key areas of non-compliance, including a lack of transparency in disclosing the status of copyrighted training data, energy consumption, emissions, and risk mitigation methodology.

They also observed a clear difference between open and closed model releases, with open releases providing better disclosure of resources but posing bigger challenges in controlling deployment.

The study concludes that all providers, regardless of their release strategy, have room for improvements.

A Reduction in Transparency

In recent times, major model releases have seen a decline in transparency.

OpenAI, for instance, chose not to disclose data and compute details in its GPT-4 report, citing the competitive landscape and safety implications.

Potential Impact of the EU AI Regulations

The Stanford researchers believe that the enforcement of the EU AI Act could significantly influence the AI industry.

The Act emphasizes the need for transparency and accountability, encouraging large foundation model providers to adapt to new standards.

However, the swift adaptation and evolution of business practices to meet regulatory requirements remain a major challenge for AI providers.

Despite this, the researchers suggest that with robust regulatory pressure, providers could achieve higher compliance scores through meaningful yet feasible changes.

The Future of AI Regulation

The study offers an insightful perspective on the future of AI regulation.

The researchers assert that if properly enforced, the AI Act could substantially impact the AI ecosystem, promoting transparency and accountability.

As we stand on the threshold of regulating this transformative technology, the study emphasizes the importance of transparency as a fundamental requirement for responsible AI deployment.

Meta Develops AI Speech Tool Voicebox, Holds Off Release Due to Misuse Concerns

Meta, a leading name in the tech industry, has made a significant leap in artificial intelligence (AI) by developing Voicebox, an advanced tool capable of generating lifelike speech.

Despite the tool’s potential, the company has chosen not to release it immediately due to concerns about potential misuse.

Voicebox

Announced last Friday, Voicebox can create convincing voice dialogue, opening up a range of possibilities, from enhancing communication across languages to delivering lifelike character dialogue in video games.

Unique in its functionality, Voicebox can generate speech it wasn’t specifically trained for.

All it requires is a text input and a short audio clip, which it then uses to produce new speech in the voice of the source audio.

In a breakthrough from traditional AI speech tools, Voicebox learns directly from raw audio and its corresponding transcription, eliminating the need for task-specific training with carefully curated datasets.

Moreover, this impressive tool can produce audio in six languages – English, French, German, Spanish, Polish, and Portuguese – offering a realistic representation of natural human speech.

Potential Misuse and Meta’s Precautionary Approach

While Voicebox opens up exciting possibilities, Meta is fully aware of the potential misuse of such a tool.

The AI tool could be misused to create ‘deepfake’ dialogues, replicating the voices of public figures or celebrities in an unethical manner.

To counter this risk, Meta has developed AI classifiers, akin to spam filters, that can differentiate between human speech and speech generated by Voicebox.

The company is advocating for transparency in AI development, coupled with a firm commitment to responsible use. As part of this commitment, Meta has no current plans to make Voicebox publicly available, emphasizing the need to balance openness with responsibility.

Instead of launching a functional tool, Meta is offering audio samples and a research paper to help researchers understand its potential and work towards responsible use.

Global Concerns Over AI Misuse

The rapid advancements in AI are causing concern among global leaders, including the United Nations (UN).

Deepfakes have been utilized in scams and have propagated hate and misinformation online, as highlighted in a recent UN report.

AI tools like Voicebox offer numerous possibilities, but they underscore the importance of cautious development and responsible use to prevent misuse.

As we continue to stride forward in the field of AI, these concerns will remain paramount.

OpenAI Launches Web Crawler GPTBot for Data Collection

By Mukund Kapoor

OpenAI, a leading name in the AI industry, has unveiled its new web crawling bot, GPTBot, to broaden the dataset for training future AI systems, possibly including the next version named “GPT-5,” as indicated by a recent trademark application.

Gathering Public Data

The newly released GPTBot will collect publicly accessible data from websites while steering clear of paywalled, sensitive, and prohibited content.

This web crawler functions similarly to those of search engines like Google and Bing, operating on the assumption that publicly accessible information is fair to use.

To block the OpenAI web crawler from accessing a site, the owner must add a “disallow” rule to a file on their server.

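OpenAI’s documented mechanism uses the standard robots.txt protocol: the crawler identifies itself with the user agent GPTBot, and a site owner who wants to opt out entirely can serve a /robots.txt file containing:

```
User-agent: GPTBot
Disallow: /
```

Narrower rules, such as disallowing only specific directories, follow the usual robots.txt syntax.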

OpenAI has assured that GPTBot will scan the scraped data to eliminate any personally identifiable information (PII) or text that contradicts its policies.

However, the opt-out approach is generating ethical concerns around consent. Critics argue that OpenAI’s actions might lead to derivative work without proper citation.

Addressing Past Controversies

The launch follows prior criticism where OpenAI was accused of scraping data without permission for training its Large Language Models (LLMs) like ChatGPT.

In response, OpenAI updated its privacy policies in April.

The new web crawler represents OpenAI’s need for more current data to maintain and enhance its LLMs.

The move may indicate a shift from OpenAI’s initial focus on transparency and safety, which is understandable given that ChatGPT remains the most widely used LLM globally.

OpenAI’s products heavily rely on the quality of data used for training, and the GPTBot aims to gather that essential data.

Competition in the AI Space

Meta, the social media titan, has also been working on AI, offering its model for free unless used by competitors or large businesses.

While OpenAI’s strategy revolves around using crawled data for profitable AI tool ecosystems, Meta aims to build a profitable business around its data.

OpenAI’s ChatGPT currently draws over 1.5 billion visits per month, and Microsoft’s $10 billion investment in OpenAI is paying off, as ChatGPT integration has enhanced Bing’s capabilities.

While GPTBot represents an advancement in how OpenAI gathers training data, it also reopens debates around copyright, consent, and ethics.

As AI systems become more advanced, striking the right balance between transparency, ethics, and technological capability will continue to challenge industry leaders.

The new web crawler’s launch highlights the complexities of innovation in the AI space, where benefits in efficiency and ability may come with potential ethical trade-offs.

FTX Pauses Sale of $500 Million Stake in AI Company Anthropic Amidst Bankruptcy

Embattled cryptocurrency exchange FTX has abruptly stopped the sale of its stake in AI firm Anthropic, valued at over $500 million, as reported by Bloomberg on Tuesday.

FTX had filed for Chapter 11 bankruptcy protection last November, followed by allegations against co-founder Sam Bankman-Fried for money laundering, fraud, and conspiracy to commit wire fraud.

Court documents filed recently alleged that Bankman-Fried and other high-ranking executives at FTX were involved in commingling over $402 million in customer funds.

The collapsed exchange is reported to owe customers around $8.7 billion, with $6.4 billion in the form of fiat currency and stablecoins.

FTX’s Efforts to Repay Creditors

In January, FTX was given the green light by a federal judge to sell off some assets to repay creditors.

Among the assets FTX has offloaded is derivatives trading platform LedgerX, sold for $50 million – a stark loss compared to the $300 million FTX paid for LedgerX in 2021.

Earlier this month, FTX, with the assistance of financial services firm Perella Weinberg Partners, had signaled its intention to offload its shares in Anthropic as part of its clawback strategy to repay creditors.

A clawback in bankruptcy is a legal process where a bankruptcy trustee retrieves property or payments made by the company prior to filing for bankruptcy.

Anthropic’s Success Story

Founded in 2021 by former OpenAI employees, Anthropic has become a front-runner in the current AI boom.

The firm launched Claude AI in March after receiving a $400 million investment from Google earlier this year, supplemented by another $450 million in Series C funding in May, led by Spark Capital.

In May, Anthropic made significant advancements in Claude AI, including the development of a rule-set based on the Universal Declaration of Human Rights, designed to promote ethical behavior and discourage undesirable actions.

A Pause on the Sale

The unexpected halt in the sale of Anthropic shares came after several potential buyers had evaluated private information about the stake.

Anthropic, which is privately held, is currently valued at $4.6 billion, according to a June report from Semafor.

Buyers in the secondary market for shares in private companies have been eager to acquire stakes in Anthropic, even at a premium, indicating the significant potential of this AI startup.

The stake, a $500 million investment made by FTX and Alameda, is among the exchange’s most coveted assets.

While the reason for this sudden halt remains undisclosed, it marks another twist in FTX’s ongoing legal and financial turmoil.

DeepMind’s New AI Project, Gemini, Aims to Surpass OpenAI’s ChatGPT

Google’s DeepMind, the firm behind the historic victory of artificial intelligence (AI) over human intelligence in the complex board game Go, is gearing up for another breakthrough.

Demis Hassabis, the CEO of DeepMind, has revealed that they are developing a more powerful AI language model named Gemini, intended to surpass OpenAI’s ChatGPT in capabilities.

In 2016, DeepMind’s AI program, AlphaGo, astounded the world by defeating a world champion Go player.

Gemini, The Fusion of Advanced Techniques

Hassabis explained that the company is planning to leverage the techniques used in the creation of AlphaGo for the development of Gemini. Gemini, similar to GPT-4, is a large language model that works with text.

“At a high level you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models.”

Hassabis

The goal is to merge these technologies to equip Gemini with advanced features like problem-solving and planning.

The foundation of AlphaGo’s prowess was reinforcement learning, a method DeepMind has perfected.

The approach involves software learning to solve intricate tasks that require strategic decision-making through trial and error and performance feedback. AlphaGo also utilized a method called tree search to investigate and remember possible game moves.
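
As a toy illustration of the tree-search half of that recipe (a generic example, not a description of Gemini’s internals), the sketch below exhaustively searches the game tree of Nim, memoizing positions it has already explored; systems like AlphaGo instead use Monte Carlo tree search guided by learned policy and value networks:

```python
# Exhaustive negamax search of a game tree, with memoization standing in for
# "remembering" positions already explored. The game here is Nim: take 1-3
# stones per turn, and whoever takes the last stone wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def value(stones: int) -> int:
    """+1 if the player to move can force a win, -1 otherwise."""
    if stones == 0:
        return -1  # the previous player took the last stone, so we have lost
    return max(-value(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones: int) -> int:
    """Choose the take that leaves the opponent in the worst position."""
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: -value(stones - t))

print(value(10), best_move(10))  # 1 2: ten stones is a forced win, take two
```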

The intent is for Gemini to incorporate similar methods and push the boundaries of language models, possibly even performing tasks on the internet and on computers.

Hassabis indicated that the development of Gemini is a time-intensive process that might take several months and could involve significant funding.

Gemini’s Anticipated Capabilities

The completion of Gemini could significantly bolster Google’s response to the competition from ChatGPT and other AI technologies.

DeepMind was acquired by Google in 2014 after showcasing impressive results from software that employed reinforcement learning to master simple video games.

It demonstrated how AI can perform tasks once believed to be exclusive to human capabilities, often with superior proficiency.

DeepMind’s reinforcement learning expertise could potentially provide Gemini with unique abilities.

However, the journey towards AI advancements is fraught with risks and uncertainties. The rapid development in language models has raised concerns about misuse of technology and control issues.

Despite these concerns, Hassabis emphasized the extraordinary potential benefits of AI in areas such as healthcare and climate science, making it crucial not to halt its development.

Hassabis and his team, while acknowledging the challenges and potential risks, continue to forge ahead, focused on the development of more powerful and innovative AI models like Gemini.

The future of AI seems promising, yet the path ahead remains one of exploration and learning.

High Demand for ChatGPT Experts: Companies Offering Salaries up to $185,000

Companies across the globe are increasingly recognizing the value of artificial intelligence (AI) and are willing to pay handsome salaries to professionals proficient in AI tools like ChatGPT.

These companies are offering salaries of around Rs 1.5 crore (roughly USD 185,000), with some offering nearly twice that amount.

Since its launch in 2022, ChatGPT, an AI chatbot developed by OpenAI, has revolutionized the tech industry.

Known for its ability to generate human-like text responses, the bot has found applications in a multitude of areas such as essay writing, music composition, and even poetry crafting.

As a result, expertise in this AI tool has become a hot commodity, opening up several job opportunities.

AI Creating Jobs

According to a study by ResumeBuilder, 91% of companies with job vacancies are seeking candidates skilled in ChatGPT, underscoring the belief that AI has the potential to increase productivity, save time, and enhance overall company performance.

A report by Business Insider indicates that companies listed on LinkedIn are ready to offer annual salaries of up to USD 185,000 (approximately Rs 1.5 crore) to individuals proficient in ChatGPT.

HR company Recruiting from Scratch, based in the US, is currently hiring for the position of Senior Machine Learning Engineer, Audio, with job requirements including familiarity with AI tools and platforms like ChatGPT.

The salary for this role ranges from USD 125,000 to USD 185,000 per year.

Interface.ai, a conversational AI company, is seeking a remote Machine Learning Engineer with experience in natural language processing and large language models like ChatGPT, offering a salary of up to USD 170,000 per year.

Emergence of Prompt Engineering

Despite concerns about AI leading to job displacement, ChatGPT has actually led to the creation of new job roles.

One such emerging profession is Prompt Engineering, which is growing in popularity.

Earlier this year, San Francisco-based AI startup Anthropic posted a job advertisement for a Prompt Engineer and a Librarian, offering a salary of up to USD 335,000 per year (approximately Rs 2.7 crore).

This demand for AI Prompt Engineering roles extends beyond San Francisco, with numerous job openings for prompt engineers found on platforms like LinkedIn and other job search websites.

Online platforms have also begun to offer courses focused on Prompt Engineering to cater to this growing demand.

Mercedes to Add ChatGPT to its Infotainment System

Mercedes is set to revolutionize the way drivers and passengers interact with their cars.

The automaker announced plans to integrate ChatGPT, an advanced conversational AI developed by OpenAI, into its infotainment systems.

The integration is part of a beta program launching on June 16, 2023, giving Mercedes customers in the U.S. a chance to experience more engaging and personalized interactions with their vehicles.

A New Era of Interactive Driving

This innovative development allows Mercedes owners to upgrade their existing MBUX (Mercedes-Benz User Experience) systems with ChatGPT’s functionalities.

A simple voice command, “Hey Mercedes, I want to join the beta program,” enrolls users in the beta, enhancing their in-car interactions.

ChatGPT is designed to mimic human-like conversation across diverse subjects. While its capabilities include content synthesis, code writing, and even creative tasks like crafting wedding vows, its role in the car environment remains to be explored fully.

Mercedes believes that ChatGPT’s conversational skills will add value to the driving experience.

“Users will experience a voice assistant that not only accepts natural voice commands but can also conduct conversations,” the automaker stated in a press release.

This feature can provide drivers with comprehensive answers to complex questions, assist with destination details, or even suggest new dinner recipes, all while ensuring their focus remains on the road.

While this integration promises to make car journeys more interesting and engaging, some concerns arise.

Are these wide-ranging functionalities necessary for drivers or passengers? What kind of interactions do users actually prefer while on the move? The answers to these questions are yet to unfold.

Mercedes’ choice to integrate ChatGPT, a third-party service, is a strategic decision to elevate its voice interface offering.

However, with this enhancement comes the responsibility of managing user data.

Although the conversations between users and the voice interface are stored in the Mercedes-Benz Intelligent Cloud and anonymized, privacy concerns are still relevant.

Mercedes emphasizes that this data collection is crucial for understanding user behavior, shaping the rollout strategy, and improving the voice assistant across markets and languages.

While the feature’s practicality and privacy implications are subjects of ongoing discussion, one thing is clear: this beta program propels Mercedes towards a future where cars are not just machines, but conversational companions.

Time will tell how this new feature impacts the everyday driving experience of Mercedes owners.