Eblogtip.com
We should all be worried about AI infiltrating crowdsourced work

  • June 18, 2023
A new paper from researchers at the Swiss university EPFL suggests that between 33% and 46% of distributed crowd workers on Amazon’s Mechanical Turk service appear to have “cheated” on a particular task assigned to them by using tools such as ChatGPT to do some of the work. If that practice is widespread, it may turn out to be a pretty serious issue.

Amazon’s Mechanical Turk has long been a refuge for frustrated developers who want to get work done by humans. In a nutshell, it’s an application programming interface (API) that feeds tasks to humans, who do them and then return the results. These tasks are usually the kind that you wish computers would be better at. Per Amazon, an example of such tasks would be: “Drawing bounding boxes to build high-quality datasets for computer vision models, where the task might be too ambiguous for a purely mechanical solution and too vast for even a large team of human experts.”
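To make the “API that feeds tasks to humans” concrete, here is a minimal sketch of how a requester might define a HIT (“Human Intelligence Task”). The parameter names match boto3’s MTurk `create_hit` call, but the task details and values are invented for illustration, and the actual API call is shown commented out since it requires AWS credentials:

```python
# Sketch of defining a Mechanical Turk HIT. Parameter names follow boto3's
# MTurk create_hit API; the values are invented for this example.

def build_hit_params(question_xml: str) -> dict:
    """Assemble the keyword arguments for mturk.create_hit()."""
    return {
        "Title": "Draw bounding boxes around vehicles",
        "Description": "Annotate images for a computer vision dataset",
        "Reward": "0.15",                    # USD per assignment, as a string
        "MaxAssignments": 3,                 # how many workers do each task
        "LifetimeInSeconds": 86400,          # how long the HIT stays listed
        "AssignmentDurationInSeconds": 600,  # time a worker gets per task
        "Question": question_xml,            # task definition (QuestionForm XML)
    }

# With AWS credentials configured, posting the task would look like:
#   import boto3
#   mturk = boto3.client("mturk")
#   hit = mturk.create_hit(**build_hit_params(question_xml))
```

Workers then complete the task in their browsers, and the requester retrieves the submitted results through the same API.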

Data scientists treat datasets differently depending on their origin: whether they were generated by people or by a large language model (LLM). The problem with Mechanical Turk is worse than it sounds, though. AI is now available cheaply enough that product managers who choose Mechanical Turk over a machine-generated solution are paying precisely because humans are supposed to be better at the task than robots. Poisoning that well of data could have serious repercussions.

“Distinguishing LLMs from human-generated text is difficult for both machine learning models and humans alike,” the researchers said. They therefore created a methodology for figuring out whether text-based content was created by a human or a machine.

The test involved asking crowdsourced workers to condense research abstracts from the New England Journal of Medicine into 100-word summaries. It is worth noting that this is precisely the kind of task that generative AI technologies such as ChatGPT are good at.
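The paper’s actual detection methodology is not reproduced here, but a toy illustration of the general idea behind such detectors — scoring text on simple stylometric features like vocabulary diversity and sentence length — might look like the sketch below. The features and threshold are invented for the example; a real detector, like the one the EPFL researchers built, would train a classifier on labeled data:

```python
# Toy illustration of stylometric scoring. Real LLM-text detectors train
# classifiers on labeled examples; these features and thresholds are invented.

def type_token_ratio(text: str) -> float:
    """Vocabulary diversity: distinct words divided by total words."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def avg_sentence_length(text: str) -> float:
    """Mean words per sentence, using a naive split on periods."""
    sentences = [s for s in text.split(".") if s.strip()]
    if not sentences:
        return 0.0
    return sum(len(s.split()) for s in sentences) / len(sentences)

def looks_machine_generated(text: str) -> bool:
    """Flag text with low lexical diversity and long, uniform sentences."""
    return type_token_ratio(text) < 0.5 and avg_sentence_length(text) > 15
```

Heuristics this crude are easy to fool, which is exactly why the researchers note that distinguishing LLM output from human writing is hard even for trained models.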

