Owner Posts Dog's Growth Photos, Netizens: How Is That Monkey Still Alive?

2023-07-26

Keke (可可), a netizen's sled dog, was adorably dopey as a puppy; the photo of her sleeping while hugging a monkey plush only one size smaller than herself was downright heartwarming...

A year later, the owner recreated the same shot, meaning only to marvel at how the dog had swelled up like an inflating balloon!

But netizens' attention went elsewhere: that monkey is tough, it's still alive!!!

Then again, one look at the plush monkey's frazzled fur tells you its year wasn't easy either...


The owner says the monkey is the longest-lived toy in the house, though who knows how much more abuse it can take...

After all, the gentle youngster of yesterday has grown into a little tyrant!

In fact, dogs, like some humans, form emotional attachments to objects from their puppyhood, so Keke most likely won't hurt the monkey plush...

When play gets out of hand, though, all bets are off, so the owner would be wise to keep an identical spare on hand, just in case!


A Sheepdog Dutifully Nannied Big Cats, Never Expecting They'd Grow Up to Be Its Biggest Backers!

2023-07-26

Ashley Gombert, a game-reserve manager in South Africa, rescued several stranded tiger cubs from a lake, only to discover that his sheepdog Solo was remarkably friendly toward them. One cub in particular loved tagging along after Solo, idolizing him like a big boss!

The cub was still very weak when it was first rescued, and Solo looked after it tenderly; little by little, their friendship grew.

When their owner heads into town, the two of them hop into the jeep together; nothing can keep them apart!

Of course, being the boss also means projecting a big brother's authority. A sheepdog's herding instincts run deep, after all; with no sheep here to manage, tigers will have to do!


Time flew by, and before anyone noticed, the little tiger had grown up.


The tigers eventually grew even bigger than Solo, but their bond remains unchanged, and one look tells you the boss's standing hasn't changed either. With backup this strong, life is pretty sweet...


No One Can Replace You: After Dad Leaves, He Waits on the Balcony

2023-07-26 · Source: 狗狗猫咪宠物控

Toby, who lives with his owners in Peru, is a dog with separation anxiety.


Every morning, after watching his owner leave from the balcony, he sits there and waits. Nothing else can hold his interest; he just waits, stubbornly.

Once he realizes his owner isn't coming back, his expression grows sadder and sadder, and every so often he lets out a mournful howl.

Is he simply too lonely at home by himself? Not quite: he has a canine buddy he plays well with, and other members of the family are home too, but he only wants his "dad". Even with the family keeping him company, he just keeps waiting for dad to come home.

Sometimes, when he is too sleepy to hold out, he curls up on the balcony to rest, but he never lets himself truly fall asleep. Only after his owner comes home, and Toby has confirmed he is right there beside him, will he return to his own bed and drift off in peace.

Knowing his dog is always waiting, the owner tries hard to come home early, and whenever he goes out for fun, he brings Toby along.

As long as his owner is near, Toby opens up and becomes a cheerful, happy-go-lucky furkid. The gloom lifts from his face, replaced by a big, beaming grin.

With dad nearby, he has a blast playing on the beach with his little buddy too! Snuggled up against dad: those are the happiest days of all.


With his owner by his side, he is lively and joyful, playful and full of laughter, as if he hadn't a care in the world.


Now Doing Well! A Follow-Up on the Park Peacock Bitten by a Pet Dog

2023-08-03

A video of a pet dog chasing and biting caged peacocks recently circulated online, drawing wide attention from netizens.

The footage shows an unleashed pet dog squeezing into the enclosure and frantically chasing and biting a white peacock, while the caged birds scatter in panic.

The incident reportedly took place at Baise Beach Leisure Park (百色沙滩休闲公园).

On August 2, a reporter contacted the park's management. "The dog's owner got in touch with us right away on the day of the incident and was sincerely apologetic; after friendly negotiation, the two sides reached a settlement," a park representative said. Around noon on July 31, a woman was walking her dog off-leash near the park's petting-zoo area (萌宠小乐园) when the dog slipped into the enclosure and went after the peacocks. Staff rushed to the scene, and the owner entered the enclosure to drive the dog out.

Since the incident, staff have reinforced the peacock enclosure, wrapping a layer of heavy wire mesh around its base.

"The peacock has been moved to a safe area for treatment," the representative added. The park called in a veterinarian immediately after the incident to examine and treat the bird; its wounds have been properly dressed, and it is now in good condition.


Riding the Tailwind of Hefei's Pet Economy, the Chaohu Xideluo Pet Health Industrial Park Is Complete and in Production

2023-07-31

Recently, Xideluo (Hefei) Pet Products Co., Ltd. (希德罗(合肥)宠物用品有限公司) applied to the Anhui Provincial Department of Agriculture and Rural Affairs for an additive premix feed production license covering all three categories of solid, semi-solid, and liquid premixes, making it one of the few Chinese companies licensed across the full range of pet additive premix feed. The application passed on-site inspection and was publicized on the department's website on July 29, 2023, marking the Xideluo Pet Health Industrial Park's move into substantive production and positioning the park as another major wellspring of high-quality growth for Hefei's pet economy.

The project has enjoyed the full support of the Chaohu municipal Party committee and government, and Zhonghan Town (中垾镇), where the park is located, set up a dedicated project task force to keep construction moving quickly. With the backing of the municipal bureaus of agriculture and rural affairs, investment promotion, natural resources and planning, housing and urban-rural development, economy and information technology, ecology and environment, and transportation, Phase I of the park has been completed and is ready for production.

On the morning of May 5, 2023, Yu Aihua, member of the provincial Party standing committee and Hefei municipal Party secretary, chaired an enlarged meeting of the municipal Party standing committee to study the spirit of the April 28 Politburo meeting, review the city's economic performance so far this year, and lay out the next stage of work. The meeting called for dedicated task forces to advance new business formats such as the pet economy, with targeted policy support, legal services, and public-communication safeguards, continually opening new tracks and shaping new advantages.

The Xideluo Pet Health Industrial Park aims to reshape the pet-health market, help drive sustained, high-quality growth of the pet-health industry across the Jianghuai region, and answer Hefei's policy call to develop the pet economy. As of press time, tenants include Anhui Xideluo Biotechnology Co., Ltd., Xideluo (Hefei) Pet Products Co., Ltd., Xideluo Animal Pharmaceutical (Hefei) Co., Ltd., Hefei Beiliya Pet Medical Management Co., Ltd., and the Xideluo Pet Health Research Institute, with businesses spanning clinical research on pet nutrition, development of pet drugs and diagnostic kits, pet-food manufacturing, pet nutritional-supplement manufacturing, and veterinary care.

Tenant Anhui Xideluo Biotechnology Co., Ltd. has signed an agreement with Anhui Institute of International Business to establish a joint new-media industry-education integration center, and runs industry-academia-research collaborations with the College of Animal Science at Anhui Science and Technology University, the Department of Bioengineering at Chizhou Vocational and Technical College, and other institutions. It has also founded the Xideluo (Hefei) Pet Health Research Institute to strengthen technology-driven innovation and support the development of the pet-medical and pet-nutrition industries.

Phase I of tenant Xideluo (Hefei) Pet Products Co., Ltd.'s veterinary (pet) diagnostic-kit and pet-food project is already partially ready for production: a workshop with annual capacity for 20 million bottles/boxes of pet nutritional supplements and two 10,000-ton premium pet staple-food lines are due to come online between August and October 2023. Once fully ramped up, annual output value is projected at no less than RMB 1 billion.

According to the plan, Phase I of the park comprises the Pet Health Research Institute, a livestream e-commerce operations center for pet products, a pet-food production base, and veterinary (pet) diagnostic-kit facilities; Phase II will focus on upstream and downstream manufacturing for the pet industry, including smart pet products, pet toys, and pet pharmaceuticals. The full park is expected to be built and at capacity around 2025, with annual output value of RMB 5 billion, strongly boosting coordinated growth across smart pet toys, pet household goods, pet food, veterinary care, and related industries, and making the park a key engine of Hefei's pet economy.

As Hefei's park for pet-economy innovation and industry-academia-research transfer, the Xideluo Pet Health Industrial Park will focus on the pet-nutrition research, pet-drug R&D, and pet food and health-product chains, steadily broadening and deepening its business, converting research results into products, and extending and strengthening the pet-health industry chain.

Xie Xiang, who heads the park, sensed during the 2022 pandemic that the pet economy held great promise and decisively invested RMB 180 million to build the project in Chaohu, Hefei. Trained in animal husbandry and veterinary medicine at Anhui Agricultural University, and deputy director of the Agriculture and Rural Committee of the 10th Anhui provincial committee of the China National Democratic Construction Association, Xie follows markets and policy closely. He noted that Hefei recently proposed building two to three pet industrial parks, striving for a park at the hundred-billion-yuan scale and bringing in leading enterprises to lift the whole city's pet economy. He expects more supporting policies to be rolled out and implemented, giving companies confidence as they build their businesses; in his view, Hefei's pet economy has enormous room for rapid, high-quality growth, and companies that work steadily and track policy and market signals are bound to achieve great things.

On May 13, the municipal Bureau of Agriculture and Rural Affairs carried out field research on the pet economy at manufacturers of pet feed, pet drugs, and pet medical devices. Liu Zhengyi stressed building on Hefei's existing pet-industry base, doing the work of both regulation and planning, deepening industry surveys and policy research, and driving high-quality, high-efficiency development of Hefei's pet industry.

On the afternoon of July 12, the bureau held a pet-economy work symposium, which set the goal of building two to three pet industrial parks in Hefei, striving for a hundred-billion-yuan-scale park, and introducing and cultivating leading enterprises to propel the city's pet economy.

Going forward, guided by Hefei's pet-economy strategy and backed by strong in-house R&D, the Xideluo Pet Health Industrial Park expects to attract a growing roster of partner enterprises. Anchored in Chaohu and strengthening its industry chain, it aims to create lasting value for Hefei's pet economy and become a significant pole of pet-economy development in China.


Meta Develops AI Speech Tool Voicebox, Holds Off Release Due to Misuse Concerns

Meta, a leading name in the tech industry, has made a significant leap in artificial intelligence (AI) by developing Voicebox, an advanced tool capable of generating lifelike speech.

Despite the tool’s potential, the company has chosen not to release it immediately due to concerns about potential misuse.

Voicebox

Announced last Friday, Voicebox can create convincing voice dialogue, opening up a range of possibilities, from enhancing communication across languages to delivering lifelike character dialogue in video games.

Unique in its functionality, Voicebox can generate speech it wasn’t specifically trained for.

All it requires is some text input and a short audio clip, which it then uses to generate entirely new speech in the voice of the source audio.

In a breakthrough from traditional AI speech tools, Voicebox learns directly from raw audio and its corresponding transcription, eliminating the need for task-specific training with carefully curated datasets.

Moreover, this impressive tool can produce audio in six languages – English, French, German, Spanish, Polish, and Portuguese – offering a realistic representation of natural human speech.
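
Meta has not released Voicebox or documented a public API, so the following is only a minimal interface sketch of what a zero-shot, reference-conditioned TTS model of this kind looks like from the caller's side. The names ZeroShotTTS and synthesize, and the 16 kHz default, are this sketch's assumptions, not Meta's:

```python
# Hypothetical interface sketch of a Voicebox-style zero-shot TTS model.
# Voicebox is unreleased: every name here is illustrative; only the
# inputs (text + short reference clip + language) follow Meta's description.
from dataclasses import dataclass

import numpy as np

LANGUAGES = {"en", "fr", "de", "es", "pl", "pt"}  # the six languages Meta lists


@dataclass
class ZeroShotTTS:
    sample_rate: int = 16_000  # assumed; Meta does not state a rate

    def synthesize(self, text: str, reference: np.ndarray, language: str = "en") -> np.ndarray:
        """Return a waveform of `text` spoken in the voice of `reference`.

        A real model would (1) encode the short reference clip into a
        speaker/style conditioning signal, (2) generate a speech
        representation of `text` conditioned on it, and (3) vocode that
        representation into audio. No per-speaker fine-tuning is needed.
        """
        if language not in LANGUAGES:
            raise ValueError(f"unsupported language: {language}")
        raise NotImplementedError("illustrative sketch only")
```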

Potential Misuse and Meta’s Precautionary Approach

While Voicebox opens up exciting possibilities, Meta is fully aware of the potential misuse of such a tool.

The AI tool could be misused to create ‘deepfake’ dialogues, replicating the voices of public figures or celebrities in an unethical manner.

To counter this risk, Meta has developed AI classifiers, akin to spam filters, that can differentiate between human speech and speech generated by Voicebox.
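
Meta has not described these classifiers' internals; framed as an assumption, a detector of this kind reduces to ordinary binary classification over acoustic features. A minimal sketch with scikit-learn, trained here on random placeholder features rather than real audio:

```python
# Sketch of a synthetic-speech detector as binary classification, in the
# spirit of Meta's "spam filter"-like classifiers. The features are random
# placeholders; a real system would use spectrogram or self-supervised
# audio embeddings extracted from recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 2000, 64
X_human = rng.normal(0.0, 1.0, (n, d))  # stand-in for features of real speech
X_synth = rng.normal(0.3, 1.0, (n, d))  # stand-in for features of generated speech
X = np.vstack([X_human, X_synth])
y = np.array([0] * n + [1] * n)         # 0 = human, 1 = synthetic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```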

The company is advocating for transparency in AI development, coupled with a firm commitment to responsible use. As part of this commitment, Meta has no current plans to make Voicebox publicly available, emphasizing the need to balance openness with responsibility.

Instead of launching a functional tool, Meta is offering audio samples and a research paper to help researchers understand its potential and work towards responsible use.

Global Concerns Over AI Misuse

The rapid advancements in AI are causing concern among global leaders and institutions, including the United Nations (UN).

Deepfakes have been utilized in scams and have propagated hate and misinformation online, as highlighted in a recent UN report.

Creating AI tools like Voicebox offers numerous possibilities, but it underscores the importance of cautious development and responsible use to prevent misuse.

As we continue to stride forward in the field of AI, these concerns will remain paramount.

AI Chatbots Falling Short of EU Law Standards, a Stanford Study Reveals

A recent study conducted by researchers from Stanford University concludes that current large language models (LLMs) such as OpenAI’s GPT-4 and Google’s Bard are failing to meet the compliance standards set by the European Union (EU) Artificial Intelligence (AI) Act.

Understanding the EU AI Act

The EU AI Act, the first of its kind to regulate AI on a national and regional scale, was recently adopted by the European Parliament.

It not only oversees AI within the EU, a region housing 450 million people, but also sets the precedent for AI regulations globally.

However, as per the Stanford study, AI companies have a considerable distance to cover to attain compliance.

Compliance Analysis of AI Providers

In their study, the researchers evaluated ten major model providers against the 12 requirements of the AI Act, scoring each provider on a 0 to 4 scale.

Stanford’s report says:

“We present the final scores in the above figure with the justification for every grade made available. Our results demonstrate a striking range in compliance across model providers: some providers score less than 25% (AI21 Labs, Aleph Alpha, Anthropic) and only one provider scores at least 75% (Hugging Face/BigScience) at present. Even for the highest-scoring providers, there is still significant margin for improvement. This confirms that the Act (if enacted, obeyed, and enforced) would yield significant change to the ecosystem, making substantial progress towards more transparency and accountability.”

The findings displayed a significant variation in compliance levels, with some providers scoring below 25% and only Hugging Face/BigScience scoring at least 75%.

This suggests a considerable scope for improvement even for high-scoring providers.
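
To make the rubric concrete: 12 requirements scored 0 to 4 give a maximum of 48 points, so the study's 25% and 75% thresholds correspond to 12 and 36 points. A small sketch of that arithmetic; the per-requirement scores below are invented for illustration, not the study's data:

```python
# Compliance arithmetic for the AI Act rubric: 12 requirements,
# each scored 0-4, for a maximum of 48 points per provider.
# The score vectors below are placeholders, NOT the study's data.
N_REQUIREMENTS, MAX_PER_REQ = 12, 4
MAX_TOTAL = N_REQUIREMENTS * MAX_PER_REQ  # 48

providers = {
    "provider_a": [4, 3, 4, 2, 3, 4, 3, 4, 3, 2, 4, 3],  # hypothetical
    "provider_b": [1, 0, 2, 1, 0, 1, 2, 0, 1, 1, 0, 2],  # hypothetical
}

for name, scores in providers.items():
    assert len(scores) == N_REQUIREMENTS
    total = sum(scores)
    print(f"{name}: {total}/{MAX_TOTAL} points = {100 * total / MAX_TOTAL:.0f}%")
```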

The Problem Areas

[Figure: "Do Foundation Model Providers Comply with the Draft EU AI Act?" — per-requirement problem areas]

The researchers highlighted key areas of non-compliance, including a lack of transparency in disclosing the status of copyrighted training data, energy consumption, emissions, and risk mitigation methodology.

They also observed a clear difference between open and closed model releases, with open releases providing better disclosure of resources but posing bigger challenges in controlling deployment.

The study concludes that all providers, regardless of their release strategy, have room for improvements.

A Reduction in Transparency

In recent times, major model releases have seen a decline in transparency.

OpenAI, for instance, chose not to disclose any data and compute details in their reports for GPT-4, citing competitive landscape and safety implications.

Potential Impact of the EU AI Regulations

The Stanford researchers believe that the enforcement of the EU AI Act could significantly influence the AI industry.

The Act emphasises the need for transparency and accountability, encouraging large foundation model providers to adapt to new standards.

However, the swift adaptation and evolution of business practices to meet regulatory requirements remain a major challenge for AI providers.

Despite this, the researchers suggest that with robust regulatory pressure, providers could achieve higher compliance scores through meaningful yet feasible changes.

The Future of AI Regulation

The study offers an insightful perspective on the future of AI regulation.

The researchers assert that if properly enforced, the AI Act could substantially impact the AI ecosystem, promoting transparency and accountability.

As we stand on the threshold of regulating this transformative technology, the study emphasises the importance of transparency as a fundamental requirement for responsible AI deployment.

OpenAI Needs to Improve ChatGPT’s Reliability: Are Users Aware of Its Limitations?

ChatGPT, OpenAI's AI chatbot, is under scrutiny due to its frequent inability to distinguish fact from fiction, often leading users astray with the information it provides.

The Warning Sign Often Ignored

OpenAI has highlighted on its homepage one of the many limitations of ChatGPT – it may sometimes provide incorrect information.

Although this warning holds true for several information sources, it brings to light a concerning trend. Users often disregard this caveat, assuming the data provided by ChatGPT to be factual.

Unreliable Legal Aid: The Case of Steven A. Schwartz

The misleading nature of ChatGPT came into stark focus when US lawyer Steven A. Schwartz turned to the chatbot for case references in a lawsuit against Colombian airline Avianca. In a turn of events, all the cases the AI suggested turned out to be non-existent.

Despite Schwartz’s concerns about the veracity of the information, the AI reassured him of its authenticity.

Such instances raise questions about the chatbot’s reliability.

A Misunderstood Reliable Source?

The frequency with which users treat ChatGPT as a credible source of information calls for a wider recognition of its limitations.

Over the past few months, there have been several reports of people being misled by its falsehoods, which have been largely inconsequential but nonetheless worrying.

One concerning instance involved a Texas A&M professor who used ChatGPT to verify if students’ essays were AI-generated.

ChatGPT confirmed, incorrectly, that they were, leading to the threat of failing the entire class. This incident underscores the risk of the misinformation that ChatGPT can spread, potentially leading to more serious consequences.

Cases like these do not entirely discredit the potential of ChatGPT and other AI chatbots. In fact, these tools, under the right conditions and with adequate safeguards, could be exceptionally useful.

However, it’s crucial to realize that at present, their capabilities are not entirely reliable.

The Role of the Media and OpenAI

The media and OpenAI bear some responsibility for this issue.

Media often portrays these systems as emotionally intelligent entities, failing to emphasize their unreliability. Similarly, OpenAI could do more to warn users of the potential misinformation that ChatGPT can provide.

Recognizing ChatGPT as a Search Engine

OpenAI should acknowledge that users treat ChatGPT as a search engine and provide clear, upfront warnings accordingly.

Chatbots present information as fluently generated text delivered in a friendly, all-knowing tone, making it easy for users to assume the information is accurate.

This pattern reinforces the need for stronger disclaimers and cautionary measures from OpenAI.

The Path Forward

OpenAI needs to implement changes to reduce the likelihood of users being misled.

This could include programming ChatGPT to caution users to verify its sources when asked for factual citations, or making it clear when it is incapable of making a judgment.
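
OpenAI exposes no such setting, but a deployer could approximate the idea in a thin wrapper: route the request through the chat API, then flag citation-like content for verification. A minimal sketch using the OpenAI Python client; the detection regex and the appended warning text are this sketch's assumptions:

```python
# Sketch: wrap a chat call so answers containing citation-like content
# carry an explicit "verify these sources" warning. The regex heuristic
# and wording are assumptions of this sketch, not an OpenAI feature.
import re

from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY

client = OpenAI()
CITATION_PATTERN = re.compile(r"\bv\.\s|\bet al\.|\(\d{4}\)|\[\d+\]")


def ask_with_citation_guard(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "If asked for citations, remind the user that "
                        "they must be independently verified."},
            {"role": "user", "content": question},
        ],
    )
    answer = resp.choices[0].message.content or ""
    if CITATION_PATTERN.search(answer):
        answer += ("\n\n[Note: the references above were generated by a "
                   "language model and may not exist. Verify each one "
                   "against a primary source before relying on it.]")
    return answer
```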

OpenAI has indeed made improvements, making ChatGPT more transparent about its limitations.

However, inconsistencies persist and call for more action to ensure that users are fully aware of the potential for error and misinformation.

Without such measures, a simple disclaimer like “May occasionally generate incorrect information” seems significantly inadequate.

Spotting AI-Written Text Gets Easier with New Research

Researchers have found a new method to determine whether a piece of text was penned by a human or an artificial intelligence (AI).

This new detection technique leverages RoBERTa, a pretrained language model, to analyze the structure of text.

Finding the Differences

The study revealed that the text produced by AI systems, such as ChatGPT and Davinci, displays different patterns compared to human text.

When these texts were visualized as points in a multi-dimensional space, the points representing AI-written text were found to occupy a smaller region than the points representing human-written text.
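
The article doesn't name the exact estimator, but the idea can be illustrated: embed each set of texts with RoBERTa and compare how widely the resulting points spread. The sketch below uses total embedding variance as a crude stand-in for the study's area measure (the real work likely uses a more principled estimate), and the sample sentences are invented:

```python
# Sketch: embed texts with RoBERTa and compare how much "room" each set
# of points occupies. Total variance is a crude stand-in for the study's
# measure, which the article does not specify.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")
model.eval()


def spread(texts: list[str]) -> float:
    with torch.no_grad():
        enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
        hidden = model(**enc).last_hidden_state  # (batch, seq_len, 768)
        emb = hidden.mean(dim=1)                 # mean-pool each text to 768-d
    return emb.var(dim=0).sum().item()           # total variance of the points


human = ["I fed the cat before the rain started.",
         "She laughed way too loudly at her own joke."]
machine = ["The cat was fed prior to the onset of precipitation.",
           "She laughed audibly at the joke that she had made."]
print("human spread:  ", spread(human))
print("machine spread:", spread(machine))
```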

Using this key difference, researchers designed a tool that can resist common tactics employed to camouflage AI-written text.

The performance of this tool remained impressive even when it was tested with various types of text and AI models, showing high accuracy.

However, its accuracy decreased when the tool was tested against DIPPER, a sophisticated paraphrasing method used to disguise AI-generated text.

Despite this, it still performed better than other available detectors.

One of the exciting aspects of this tool is its capability to work with languages other than English. The research showed that while the pattern of text points varied across languages, AI-written text consistently occupied a smaller region than human-written text in every language tested.

Looking Ahead

While the researchers acknowledged that the tool faces difficulties when dealing with certain types of AI-generated text, they remain optimistic about potential enhancements in the future.

They also suggested exploring other models, similar to RoBERTa, for understanding the structure of text.

Earlier this year, OpenAI introduced a tool designed to distinguish between human and AI-generated text.

Although this tool provides valuable assistance, it is not flawless and can sometimes misjudge. The developers have made this tool publicly available for free to receive feedback and make necessary improvements.

These developments underscore the ongoing endeavors in the tech world to tackle the challenges posed by AI-generated content. Tools like these are expected to play a crucial role in battling misinformation campaigns and mitigating other harmful effects of AI-generated content.

Polygon Introduces AI Chatbot Assistant, Polygon Copilot

Polygon, a well-known developer of Ethereum scaling solutions, has leaped into the future of Web3 by introducing an AI chatbot assistant, Polygon Copilot, to its platform.

What is Polygon Copilot?

Imagine a personal guide that can help you navigate the expansive ecosystem of decentralized applications (dApps) on Polygon.

Polygon Copilot is just that! It’s an AI assistant that can answer your questions and provide information about the Polygon platform.

It comes with three different user levels: Beginner, Advanced, and Degen, each designed for users at different stages of familiarity with the ecosystem.

The assistant is built on OpenAI’s GPT-3.5 and GPT-4 models and is incorporated into the user interface of Polygon.

One of the main goals of the Copilot is to offer insights, analytics, and guidance based on the Polygon protocol documentation.

A standout feature of Polygon Copilot is its commitment to transparency. It discloses the sources of the information it gives, which enables users to verify the information and explore the topic further.
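
Polygon hasn't published Copilot's internals, but grounding answers in protocol documentation while disclosing where each answer came from is, as an assumption, the standard retrieval-augmented pattern. A toy sketch; the document snippets and the bag-of-words scoring are placeholders:

```python
# Toy sketch of retrieval with disclosed sources, the pattern a
# docs-grounded assistant like Polygon Copilot presumably follows.
# The snippets and word-overlap scoring are placeholders.
DOCS = {
    "gas.md": "Transactions on Polygon PoS pay gas fees in MATIC.",
    "bridge.md": "The PoS bridge moves assets between Ethereum and Polygon.",
}


def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    """Rank docs by word overlap with the query; return (source, text)."""
    q = set(query.lower().split())
    ranked = sorted(DOCS.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return ranked[:k]


for source, text in retrieve("how do i pay gas on polygon"):
    # A real assistant would have the LLM write an answer from `text`;
    # the key point is that `source` is surfaced to the user.
    print(f"{text}  [source: {source}]")
```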

Polygon’s step towards integrating AI technology is part of a growing trend in the Web3 world.

Other companies including Alchemy, Solana Labs, and Etherscan are also harnessing the potential of AI.

Using Polygon Copilot

To start with Polygon Copilot, users need to connect a wallet that will serve as the user account.

This account is given credits for asking questions, with new credits added every 24 hours.
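
Mechanically, that metering can be a small per-wallet ledger with a time-based refill. A sketch under stated assumptions: the article gives only the 24-hour cadence, so the quota size below is a placeholder, not Polygon's real number:

```python
# Sketch of per-wallet question credits refilled every 24 hours, as the
# article describes. DAILY_CREDITS is a placeholder; Polygon's actual
# quota is not stated.
import time

DAILY_CREDITS = 10           # placeholder value
REFILL_SECONDS = 24 * 3600   # 24-hour cadence per the article

ledger: dict[str, dict[str, float]] = {}  # wallet -> {"credits", "last_refill"}


def spend_credit(wallet: str, now: float | None = None) -> bool:
    """Spend one credit for a question; refill the wallet if 24h passed."""
    now = time.time() if now is None else now
    acct = ledger.setdefault(wallet, {"credits": DAILY_CREDITS, "last_refill": now})
    if now - acct["last_refill"] >= REFILL_SECONDS:
        acct["credits"] = DAILY_CREDITS
        acct["last_refill"] = now
    if acct["credits"] <= 0:
        return False  # out of credits until the next refill
    acct["credits"] -= 1
    return True


print(spend_credit("0xabc"))  # True: first question spends one credit
```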

And what sets Polygon Copilot apart? It’s not just any plain-speaking AI; it has a flair of its own. Ask it about the top NFT project on Polygon, and you’ll get a response full of personality.

However, it’s essential to remember that like all AI technology, Polygon Copilot isn’t perfect.

Users are cautioned that the AI may provide inaccurate information and to take the chatbot’s answers with a grain of salt.

Polygon has set limits on the number of responses the chatbot can generate to prevent spamming and overload.

What’s Polygon All About?

Polygon presents itself as ‘Ethereum 2.0’, addressing scalability issues within the Ethereum blockchain.

It enhances the value of any applications built on the Ethereum blockchain.

The introduction of the AI assistant is a leap forward for the platform. Whether you are a beginner looking for basic guidance or an advanced user trying to build complex products, Polygon Copilot is there to assist.

It’s also handy for analysts seeking accurate data about NFTs and dApps.

Web3 and the Promise of Data Ownership

Polygon’s use of AI reflects the evolution of the internet, known as Web 3.0. This version of the internet promises safety, transparency, and control over the data created by users.

Web 3.0 operates on blockchain technology, a decentralized system that removes corporate access to private data.

Blockchains were born alongside Bitcoin, the first cryptocurrency, aiming to break free from corporations’ control over our data.

In the spirit of Web 3.0, platforms like Polygon allow users to control access to their data and attach value to it, enhancing data ownership.

As the tech world moves forward, innovations like Polygon Copilot highlight the growing intersection between artificial intelligence and blockchain technology, redefining user experience in the process.