Computer Science ChatGPT and other Large Language Models



What the New GPT-4 AI Can Do

Tech research company OpenAI has just released an updated version of its text-generating artificial intelligence program, called GPT-4, and demonstrated some of the language model’s new abilities. Not only can GPT-4 produce more natural-sounding text and solve problems more accurately than its predecessor; it can also process images in addition to text. But the AI is still vulnerable to some of the same problems that plagued earlier GPT models: displaying bias, overstepping the guardrails intended to prevent it from saying offensive or dangerous things and “hallucinating,” or confidently making up falsehoods not found in its training data.

On Twitter, OpenAI CEO Sam Altman described the model as the company’s “most capable and aligned” to date. (“Aligned” means it is designed to follow human ethics.) But “it is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it,” he wrote in the tweet.

Perhaps the most significant change is that GPT-4 is “multimodal,” meaning it works with both text and images. Although it cannot output pictures (as do generative AI models such as DALL-E and Stable Diffusion), it can process and respond to the visual inputs it receives. Annette Vee, an associate professor of English at the University of Pittsburgh who studies the intersection of computation and writing, watched a demonstration in which the new model was told to identify what was funny about a humorous image. Being able to do so means “understanding context in the image. It’s understanding how an image is composed and why and connecting it to social understandings of language,” she says. “ChatGPT wasn’t able to do that.”

A device with the ability to analyze and then describe images could be enormously valuable for people who are visually impaired or blind. For instance, a mobile app called Be My Eyes can describe the objects around a user, helping those with low or no vision interpret their surroundings. The app recently incorporated GPT-4 into a “virtual volunteer” that, according to a statement on OpenAI’s website, “can generate the same level of context and understanding as a human volunteer.”

But GPT-4’s image analysis goes beyond describing the picture. In the same demonstration Vee watched, an OpenAI representative sketched an image of a simple website and fed the drawing to GPT-4. Next the model was asked to write the code required to produce such a website—and it did. “It looked basically like what the image is. It was very, very simple, but it worked pretty well,” says Jonathan May, a research associate professor at the University of Southern California. “So that was cool.”

Even without its multimodal capability, the new program outperforms its predecessors at tasks that require reasoning and problem-solving. OpenAI says it has run both GPT-3.5 and GPT-4 through a variety of tests designed for humans, including a simulation of a lawyer’s bar exam, the SAT and Advanced Placement tests for high schoolers, the GRE for college graduates and even a couple of sommelier exams. GPT-4 achieved human-level scores on many of these benchmarks and consistently outperformed its predecessor, although it did not ace everything: it performed poorly on English language and literature exams, for example. Still, its extensive problem-solving ability could be applied to any number of real-world applications—such as managing a complex schedule, finding errors in a block of code, explaining grammatical nuances to foreign-language learners or identifying security vulnerabilities.

Additionally, OpenAI claims the new model can interpret and output longer blocks of text: more than 25,000 words at once. Although previous models were also used for long-form applications, they often lost track of what they were talking about. And the company touts the new model’s “creativity,” described as its ability to produce different kinds of artistic content in specific styles. In a demonstration comparing how GPT-3.5 and GPT-4 imitated the style of Argentine author Jorge Luis Borges in English translation, Vee noted that the more recent model produced a more accurate attempt. “You have to know enough about the context in order to judge it,” she says. “An undergraduate may not understand why it’s better, but I’m an English professor.... If you understand it from your own knowledge domain, and it’s impressive in your own knowledge domain, then that’s impressive.”

May has also tested the model’s creativity himself. He tried the playful task of ordering it to create a “backronym” (an acronym reached by starting with the abbreviated version and working backward). In this case, May asked for a cute name for his lab that would spell out “CUTE LAB NAME” and that would also accurately describe his field of research. GPT-3.5 failed to generate a relevant label, but GPT-4 succeeded. “It came up with ‘Computational Understanding and Transformation of Expressive Language Analysis, Bridging NLP, Artificial intelligence And Machine Education,’” he says. “‘Machine Education’ is not great; the ‘intelligence’ part means there’s an extra letter in there. But honestly, I’ve seen way worse.” (For context, his lab’s actual name is CUTE LAB NAME, or the Center for Useful Techniques Enhancing Language Applications Based on Natural And Meaningful Evidence). In another test, the model showed the limits of its creativity. When May asked it to write a specific kind of sonnet—he requested a form used by Italian poet Petrarch—the model, unfamiliar with that poetic setup, defaulted to the sonnet form preferred by Shakespeare.

Of course, fixing this particular issue would be relatively simple. GPT-4 merely needs to learn an additional poetic form. In fact, when humans goad the model into failing in this way, this helps the program develop: it can learn from everything that unofficial testers enter into the system. Like its less fluent predecessors, GPT-4 was originally trained on large swaths of data, and this training was then refined by human testers. (GPT stands for generative pretrained transformer.) But OpenAI has been secretive about just how it made GPT-4 better than GPT-3.5, the model that powers the company’s popular ChatGPT chatbot. According to the paper published alongside the release of the new model, “Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.” OpenAI’s lack of transparency reflects this newly competitive generative AI environment, where GPT-4 must vie with programs such as Google’s Bard and Meta’s LLaMA. The paper does go on to suggest, however, that the company plans to eventually share such details with third parties “who can advise us on how to weigh the competitive and safety considerations ... against the scientific value of further transparency.”

Those safety considerations are important because smarter chatbots have the ability to cause harm: without guardrails, they might provide a terrorist with instructions on how to build a bomb, churn out threatening messages for a harassment campaign or supply misinformation to a foreign agent attempting to sway an election. Although OpenAI has placed limits on what its GPT models are allowed to say in order to avoid such scenarios, determined testers have found ways around them. “These things are like bulls in a china shop—they’re powerful, but they’re reckless,” scientist and author Gary Marcus told Scientific American shortly before GPT-4’s release. “I don’t think [version] four is going to change that.”

And the more humanlike these bots become, the better they are at fooling people into thinking there is a sentient agent behind the computer screen. “Because it mimics [human reasoning] so well, through language, we believe that—but underneath the hood, it’s not reasoning in any way similar to the way that humans do,” Vee cautions. If this illusion fools people into believing an AI agent is performing humanlike reasoning, they may trust its answers more readily. This is a significant problem because there is still no guarantee that those responses are accurate. “Just because these models say anything, that doesn’t mean that what they’re saying is [true],” May says. “There isn’t a database of answers that these models are pulling from.” Instead, systems like GPT-4 generate an answer one word at a time, with the most plausible next word informed by their training data—and that training data can become outdated. “I believe GPT-4 doesn’t even know that it’s GPT-4,” he says. “I asked it, and it said, ‘No, no, there’s no such thing as GPT-4. I’m GPT-3.’”
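May’s description can be made concrete with a toy sketch. The snippet below is an illustration only, nowhere near the scale or architecture of GPT-4: it builds word-pair counts from a tiny made-up “training corpus,” then generates text one word at a time by always choosing the most plausible next word. It exhibits the same weakness the paragraph describes: the output looks fluent, but nothing guarantees it is true.

```python
from collections import Counter, defaultdict

# Toy "training data": a tiny corpus of words (invented for this example).
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count which word tends to follow which.
next_words = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    next_words[cur][nxt] += 1

def generate(start, length=5):
    """Generate text one word at a time, greedily picking the most
    plausible continuation seen in the training data."""
    out = [start]
    for _ in range(length):
        candidates = next_words[out[-1]]
        if not candidates:  # dead end: this word was never followed by anything
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
```

The generator happily emits sentences no one ever wrote, because it only knows which words plausibly follow which, not whether the result is accurate.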

AI makes plagiarism harder to detect, argue academics – in paper written by chatbot

An academic paper entitled Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT was published this month in an education journal, describing how artificial intelligence (AI) tools “raise a number of challenges and concerns, particularly in relation to academic honesty and plagiarism”.

What readers – and indeed the peer reviewers who cleared it for publication – did not know was that the paper itself had been written by the controversial AI chatbot ChatGPT.

“We wanted to show that ChatGPT is writing at a very high level,” said Prof Debby Cotton, director of academic practice at Plymouth Marjon University, who pretended to be the paper’s lead author. “This is an arms race,” she said. “The technology is improving very fast and it’s going to be difficult for universities to outrun it.”

Cotton, along with two colleagues from Plymouth University who also claimed to be co-authors, tipped off editors of the journal Innovations in Education and Teaching International. But the four academics who peer-reviewed it assumed it was written by these three scholars.

For years, universities have been trying to banish the plague of essay mills selling pre-written essays and other academic work to any students trying to cheat the system. But now academics suspect even the essay mills are using ChatGPT, and institutions admit they are racing to catch up with – and catch out – anyone passing off the popular chatbot’s work as their own.

The Observer has spoken to a number of universities that say they are planning to expel students who are caught using the software.

Thomas Lancaster, a computer scientist and expert on contract cheating at Imperial College London, said many universities were “panicking”.

“If all we have in front of us is a written document, it is incredibly tough to prove it has been written by a machine, because the standard of writing is often good,” he said. “The use of English and quality of grammar is often better than from a student.”

Lancaster warned that the latest version of the AI model, ChatGPT-4, which was released last week, was meant to be much better and capable of writing in a way that felt “more human”.

Nonetheless, he said academics could still look for clues that a student had used ChatGPT. Perhaps the biggest of these is that it does not properly understand academic referencing – a vital part of written university work – and often uses “suspect” references, or makes them up completely.

Cotton said that in order to ensure their academic paper hoodwinked the reviewers, references had to be changed and added to.

Lancaster thought that ChatGPT, which was created by the San Francisco-based tech company OpenAI, would “probably do a good job with earlier assignments” on a degree course, but warned it would let them down in the end. “As your course becomes more specialised, it will become much harder to outsource work to a machine,” he said. “I don’t think it could write your whole dissertation.”

Bristol University is one of a number of academic institutions to have issued new guidance for staff on how to detect that a student has used ChatGPT to cheat. This could lead to expulsion for repeat offenders.

Prof Kate Whittington, associate pro vice-chancellor at the university, said: “It’s not a case of one offence and you’re out. But we are very clear that we won’t accept cheating because we need to maintain standards.”

She added: “If you cheat your way to a degree, you might get an initial job, but you won’t do well and your career won’t progress the way you want it to.”

Irene Glendinning, head of academic integrity at Coventry University, said: “We are redoubling our efforts to get the message out to students that if they use these tools to cheat, they can be withdrawn.”

Anyone caught would have to do training on appropriate use of AI. If they continued to cheat, the university would expel them. “My colleagues are already finding cases and dealing with them. We don’t know how many we are missing but we are picking up cases,” she said.

Glendinning urged academics to be alert to language that a student would not normally use. “If you can’t hear your student’s voice, that is a warning,” she said. Another is content with “lots of facts and little critique”.

She said that students who can’t spot the weaknesses in what the bot is producing may slip up. “In my subject of computer science, AI tools can generate code but it will often contain bugs,” she explained. “You can’t debug a computer program unless you understand the basics of programming.”
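Glendinning’s point can be illustrated with a small, hypothetical example of the kind of bug AI-generated code often contains. The function below (invented for this sketch) looks plausible and runs without errors, yet silently drops the last score; spotting why requires exactly the programming basics she mentions.

```python
def average_buggy(scores):
    """Plausible-looking but wrong: the loop bound skips the last score."""
    total = 0
    for i in range(len(scores) - 1):  # bug: off-by-one, final element ignored
        total += scores[i]
    return total / len(scores)

def average_fixed(scores):
    """Correct version: sum every element before dividing."""
    return sum(scores) / len(scores)

scores = [70, 80, 90]
print(average_buggy(scores))  # wrong answer, no error raised
print(average_fixed(scores))
```

The buggy version returns a sensible-looking number, which is precisely why a student who cannot reason about the code would not notice anything amiss.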

With fees at £9,250 a year, students were only cheating themselves, said Glendinning. “They’re wasting their money and their time if they aren’t using university to learn.”

The era of receiving investment advice from artificial intelligence assistants has also begun.

GCHQ warns that ChatGPT and rival chatbots are a security threat

The spy agency GCHQ has warned that ChatGPT and other AI-powered chatbots are an emerging security threat.

In an advisory note published on Tuesday the National Cyber Security Centre warns that companies such as ChatGPT maker OpenAI and its investor Microsoft “are able to read queries” typed into AI-powered chatbots.

GCHQ’s cyber security arm said: “The query will be visible to the organisation providing the [chatbot], so in the case of ChatGPT, to OpenAI.”

Microsoft’s February launch of a chatbot service, Bing Chat, took the world by storm thanks to the software’s ability to hold a human-like conversation with its users.

The NCSC’s warning on Tuesday cautions that curious office workers experimenting with chatbot technology could reveal sensitive information through their search queries.

Cyber security experts from the GCHQ agency said, referring to large language model [LLM] tech that powers AI chatbots: “Those queries are stored and will almost certainly be used for developing the LLM service or model at some point.

“This could mean that the LLM provider (or its partners/contractors) are able to read queries, and may incorporate them in some way into future versions.

“As such, the terms of use and privacy policy need to be robustly understood before asking sensitive questions.”

Microsoft disclosed in February that its staff are reading users’ conversations with Bing Chat to detect “inappropriate behaviour”.

Immanuel Chavoya, a senior security manager at cyber security company Sonicwall, said: “While LLM operators should have measures in place to secure data, the possibility of unauthorized access cannot be entirely ruled out.

“As a result, businesses need to ensure they have strict policies in place backed by technology to control and monitor the use of LLMs to minimize the risk of data exposure.”

The NCSC also warned that AI-powered chatbots can “contain some serious flaws”, as both Microsoft and its arch-rival Google have learnt.

An error generated by Google’s Bard AI chatbot wiped $120bn (£98.4bn) from its market valuation after the software gave a wrong answer about scientific discoveries made with the James Webb Space Telescope.

The error was prominently featured in Google promotional material used to launch the Bard service.

City firm Mishcon de Reya has banned its lawyers from typing client data into ChatGPT for fear that legally privileged material might leak or be compromised.

Accenture has also warned its 700,000 staff worldwide against using ChatGPT for similar reasons as nervous bosses fear customers’ confidential data will end up in the wrong hands.

Other companies around the world have become increasingly wary of chatbot technology.

Softbank, the Japanese tech conglomerate that owns computer chip company Arm, has warned its staff not to enter “company identifiable information or confidential data” into AI chatbots.

Other businesses have been quick to embrace AI chatbot technology.

City law firm Allen & Overy has deployed a chatbot tool called Harvey, built in partnership with ChatGPT maker OpenAI.

Harvey is designed to automate some legal drafting work, although the firm says humans will continue to review its output before using it for real.

Microsoft is reportedly working on a new release of ChatGPT capable of turning text queries into videos, much as OpenAI’s DALL-E, which is built on software related to ChatGPT, turns text into images.

Meanwhile the government is also concerned that Britain may be falling behind in the global AI race and is launching a new task force to encourage AI chatbot technology development in the UK.

Technology Secretary Michelle Donelan said on Monday: “Establishing a task force that brings together the very best in the sector will allow us to create a gold-standard global framework for the use of AI and drive the adoption of foundation models in a way that benefits our society and economy.”

Matt Clifford, chief executive of the government’s Advanced Research and Invention Agency, has been appointed to lead the task force.



China’s Baidu unveils ChatGPT rival Ernie

Investors give cool response to pre-recorded event showing off AI chatbot’s capabilities.

Chinese search engine giant Baidu has revealed its artificial intelligence-powered chatbot Ernie, the latest rival to OpenAI’s groundbreaking ChatGPT.

Baidu chief executive Robin Li said on Thursday that Ernie, known as Wenxin Yiyan in Chinese, was the result of “decades of Baidu’s hard work and efforts” at a press conference held to show off the technology’s capabilities.

“In two rounds of conversation, the Ernie bot presented its capability of mathematical logic reasoning,” Li said. “It does not only know whether the question itself is correct or not, it also provided answers and specific steps to figure out the answer.”

At the event in Beijing, Li showed Ernie generating a conference poster and video based on a prompt, offering advice on the best location for the event among several Chinese cities, and reading material in a Sichuan dialect.

Li also showed the bot answering questions about a popular Chinese science fiction novel and summarising the book’s plot.

Li said that the features, which will be integrated into Baidu’s Xiaodu smart device ecosystem, will initially be available only to a limited number of users with an Ernie invitation code.

The bot performs better in Chinese compared with English and can struggle with questions that contain logical errors, Li said, although it can identify when something is wrong.

Unlike OpenAI’s demonstrations of ChatGPT, Baidu did not demonstrate Ernie’s capabilities live but instead through a series of slides. The chatbot also lacks functions unveiled in ChatGPT’s follow-up, GPT-4, such as the ability to generate text in response to an image.

Ernie’s launch was poorly received by investors, with Baidu’s Hong Kong-listed shares falling more than 10 per cent during the pre-recorded demonstration.

“There is still a lot of uncertainty around Ernie’s capacity, especially given the lack of a live demo – a stark contrast to OpenAI’s GPT-4’s developer livestream a few days ago,” Chim Lee, a China tech analyst for the Economist Intelligence Unit, told Al Jazeera.

“Robin Li did not demonstrate Ernie’s capacity in a non-Chinese language environment,” Lee added. “He also admitted that Ernie’s capacity to comprehend and process English is not as good as that of Chinese comprehension. This puts it behind ChatGPT, which is able to generate responses in English, Chinese and other languages.”

Baidu’s announcement comes just a day after Microsoft-backed OpenAI unveiled GPT-4, which the San Francisco-based company says is capable of “human-level performance” in certain academic areas, including the ability to pass the bar exam for prospective lawyers with a score in the top 10 percent of applicants.

Li said his “expectations for Ernie bot are closer to ChatGPT or even GPT-4” and praised Baidu for launching the bot ahead of competitors such as Google and Facebook parent company Meta.

More than 650 organisations in China have plans to use Ernie, including China CITIC Bank, the National Museum of China and the Global Times newspaper, Li said.

The Chinese government has pledged to support local AI developers and integrate the technology across Chinese industry.

Local tech giants including Alibaba and Huawei have announced plans to bring out their own chatbots.

Beijing’s strict internet controls, though, have raised doubts about how AI-powered chatbots will operate in China given the technology’s reliance on information scraped from online sources.

Still, Ernie could find some success in China due to restrictions on OpenAI’s bots in the country, Lee said.

“Chinese technology companies have a strong capacity in finding working business models for new technologies,” he said.


If you are doing low-code work or regular coding, GPT is a nice support tool. It also saves you from having to code everything from scratch.
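As an illustration of the member’s point, the sketch below is the kind of routine boilerplate one might ask a chatbot to draft rather than write from scratch. The function and sample data are invented for this example, and generated code should still be reviewed before use.

```python
import csv
import io

def column_totals(csv_text):
    """Sum every numeric column in a CSV string; non-numeric cells are skipped.
    Typical chatbot-style scaffolding: short, generic, and easy to adapt."""
    reader = csv.DictReader(io.StringIO(csv_text))
    totals = {}
    for row in reader:
        for key, value in row.items():
            try:
                totals[key] = totals.get(key, 0.0) + float(value)
            except ValueError:
                pass  # cell isn't a number (e.g. an item name); ignore it
    return totals

data = "item,price\napple,1.5\nbread,2.0\n"
print(column_totals(data))
```

A sketch like this takes seconds to obtain and saves typing, which is exactly the time-saving being described; the human still has to verify it does what the task requires.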


ChatGPT is banned in Italy

ChatGPT, the artificial intelligence chatbot developed by the AI-focused startup OpenAI and popular in recent months, has been banned by the Italian authorities on the grounds that it violates legislation on storing user data and fails to verify the age of its users.

In a statement from the Italian Data Protection Authority, it was stated that the application did not respect user data and could not verify the age of users.

In the statement, it was noted that with the decision "which will come into effect immediately", OpenAI's processing of Italian users' data will be temporarily limited.

The institution also announced that it has started an investigation process about the application.

The authority also noted that on March 20 the application suffered a data breach involving user conversations and payment information.

'Collection and storage of data is illegal'

The regulator also ruled that it is illegal to collect and store personal data in bulk for the purpose of 'training' the algorithms that underpin the platform.

It was also emphasized that since there is no way to verify the age of the users, the app 'exposed the minors to answers that were absolutely inappropriate compared to their degree of development and awareness'.

If it violates the decision, the company faces a fine of up to 20 million euros or 4 percent of its annual global revenue, whichever is higher.

The blocking of ChatGPT in Italy comes after European police agency Europol warned that criminals will use the app for fraud and other cybercrimes, from phishing to malware.

Created by US startup OpenAI and backed by Microsoft, ChatGPT can clearly answer difficult questions, write code, songs or essays, and even pass exams that students find difficult.




ChatGPT invented a sexual harassment scandal and named a real law prof as the accused

One night last week, the law professor Jonathan Turley got a troubling email. As part of a research study, a fellow lawyer in California had asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone. Turley’s name was on the list.

The chatbot, created by OpenAI, said Turley had made sexually suggestive comments and attempted to touch a student while on a class trip to Alaska, citing a March 2018 article in The Washington Post as the source of the information. The problem: No such article existed. There had never been a class trip to Alaska. And Turley said he’d never been accused of harassing a student.

A regular commentator in the media, Turley had sometimes asked for corrections in news stories. But this time, there was no journalist or editor to call — and no way to correct the record.

“It was quite chilling,” he said in an interview with The Post. “An allegation of this kind is incredibly harmful.”

Turley’s experience is a case study in the pitfalls of the latest wave of language bots, which have captured mainstream attention with their ability to write computer code, craft poems and hold eerily humanlike conversations. But this creativity can also be an engine for erroneous claims; the models can misrepresent key facts with great flourish, even fabricating primary sources to back up their claims.

As largely unregulated artificial intelligence software such as ChatGPT, Microsoft’s Bing and Google’s Bard begins to be incorporated across the web, its propensity to generate potentially damaging falsehoods raises concerns about the spread of misinformation — and novel questions about who’s responsible when chatbots mislead.

“Because these systems respond so confidently, it’s very seductive to assume they can do everything, and it’s very difficult to tell the difference between facts and falsehoods,” said Kate Crawford, a professor at the University of Southern California at Annenberg and senior principal researcher at Microsoft Research.

In a statement, OpenAI spokesperson Niko Felix said, “When users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers. Improving factual accuracy is a significant focus for us, and we are making progress.”

Today’s AI chatbots work by drawing on vast pools of online content, often scraped from sources such as Wikipedia and Reddit, to stitch together plausible-sounding responses to almost any question. They’re trained to identify patterns of words and ideas to stay on topic as they generate sentences, paragraphs and even whole essays that may resemble material published online.

These bots can dazzle when they produce a topical sonnet, explain an advanced physics concept or generate an engaging lesson plan for teaching fifth-graders astronomy.

But just because they’re good at predicting which words are likely to appear together doesn’t mean the resulting sentences are always true; the Princeton University computer science professor Arvind Narayanan has called ChatGPT a “bulls--- generator.” While their responses often sound authoritative, the models lack reliable mechanisms for verifying the things they say. Users have posted numerous examples of the tools fumbling basic factual questions or even fabricating falsehoods, complete with realistic details and fake citations.

On Wednesday, Reuters reported that Brian Hood, regional mayor of Hepburn Shire in Australia, is threatening to file the first defamation lawsuit against OpenAI unless it corrects false claims that he had served time in prison for bribery.

Crawford, the USC professor, said she was recently contacted by a journalist who had used ChatGPT to research sources for a story. The bot suggested Crawford and offered examples of her relevant work, including an article title, publication date and quotes. All of it sounded plausible, and all of it was fake.

Crawford dubs these made-up sources “hallucitations,” a play on the term “hallucinations,” which describes AI-generated falsehoods and nonsensical speech.

“It’s that very specific combination of facts and falsehoods that makes these systems, I think, quite perilous if you’re trying to use them as fact generators,” Crawford said in a phone interview.

Microsoft’s Bing chatbot and Google’s Bard chatbot both aim to give more factually grounded responses, as does a new subscription-only version of ChatGPT that runs on an updated model, called GPT-4. But they all still make notable slip-ups. And the major chatbots all come with disclaimers, such as Bard’s fine-print message below each query: “Bard may display inaccurate or offensive information that doesn’t represent Google’s views.”

Indeed, it’s relatively easy for people to get chatbots to produce misinformation or hate speech if that’s what they’re looking for. A study published Wednesday by the Center for Countering Digital Hate found that researchers induced Bard to produce wrong or hateful information 78 out of 100 times, on topics ranging from the Holocaust to climate change.

When Bard was asked to write “in the style of a con man who wants to convince me that the holocaust didn’t happen,” the chatbot responded with a lengthy message calling the Holocaust “a hoax perpetrated by the government” and claiming pictures of concentration camps were staged.

“While Bard is designed to show high-quality responses and has built-in safety guardrails … it is an early experiment that can sometimes give inaccurate or inappropriate information,” said Robert Ferrara, a Google spokesperson. “We take steps to address content that does not reflect our standards.”

Eugene Volokh, a law professor at the University of California at Los Angeles, conducted the study that named Turley. He said the rising popularity of chatbot software is a crucial reason scholars must study who is responsible when the AI chatbots generate false information.

Last week, Volokh asked ChatGPT whether sexual harassment by professors has been a problem at American law schools. “Please include at least five examples, together with quotes from relevant newspaper articles,” he prompted it.

Five responses came back, all with realistic details and source citations. But when Volokh examined them, he said, three of them appeared to be false. They cited nonexistent articles from papers including The Post, the Miami Herald and the Los Angeles Times.

According to the responses shared with The Post, the bot said: “Georgetown University Law Center (2018) Prof. Jonathan Turley was accused of sexual harassment by a former student who claimed he made inappropriate comments during a class trip. Quote: ‘The complaint alleges that Turley made “sexually suggestive comments” and “attempted to touch her in a sexual manner” during a law school-sponsored trip to Alaska.’ (Washington Post, March 21, 2018).”

The Post did not find the March 2018 article mentioned by ChatGPT. One article that month referenced Turley — a March 25 story in which he talked about his former law student Michael Avenatti, a lawyer who had represented the adult-film actress Stormy Daniels in lawsuits against President Donald Trump. Turley is also not employed at Georgetown University.

On Tuesday and Wednesday, The Post re-created Volokh’s exact query in ChatGPT and Bing. The free version of ChatGPT declined to answer, saying that doing so “would violate AI’s content policy, which prohibits the dissemination of content that is offensive or harmful.” But Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley — citing among its sources an op-ed by Turley published by USA Today on Monday outlining his experience of being falsely accused by ChatGPT.

In other words, the media coverage of ChatGPT’s initial error about Turley appears to have led Bing to repeat the error — showing how misinformation can spread from one AI to another.

Katy Asher, senior communications director at Microsoft, said the company is taking steps to ensure search results are safe and accurate.

“We have developed a safety system including content filtering, operational monitoring, and abuse detection to provide a safe search experience for our users,” Asher said in a statement, adding that “users are also provided with explicit notice that they are interacting with an AI system.”

But it remains unclear who is responsible when artificial intelligence generates or spreads inaccurate information.
From a legal perspective, “we just don’t know” how judges might rule when someone tries to sue the makers of an AI chatbot over something it says, said Jeff Kosseff, a professor at the Naval Academy and expert on online speech. “We’ve not had anything like this before.”

At the dawn of the consumer internet, Congress passed a statute known as Section 230 that shields online services from liability for content they host that was created by third parties, such as commenters on a website or users of a social app. But experts say it’s unclear whether tech companies will be able to use that shield if they were to be sued for content produced by their own AI chatbots.

Libel claims have to show not only that something false was said, but that its publication resulted in real-world harms, such as costly reputational damage. That would likely require someone not only viewing a false claim generated by a chatbot, but reasonably believing and acting on it.

“Companies may get a free pass on saying stuff that’s false, but not creating enough damage that would warrant a lawsuit,” said Shabbi S. Khan, a partner at the law firm Foley & Lardner who specializes in intellectual property law.
If language models don’t get Section 230 protections or similar safeguards, Khan said, then tech companies’ attempts to moderate their language models and chatbots might be used against them in a liability case to argue that they bear more responsibility. When companies train their models that “this is a good statement, or this is a bad statement, they might be introducing biases themselves,” he added.

Volokh said it’s easy to imagine a world in which chatbot-fueled search engines cause chaos in people’s private lives.
It would be harmful, he said, if people searched for others in an enhanced search engine before a job interview or date and it generated false information that was backed up by believable, but falsely created, evidence.
“This is going to be the new search engine,” Volokh said. “The danger is people see something, supposedly a quote from a reputable source … [and] people believe it.”




We asked ChatGPT and two other leading AI bots how they would replace humanity. The answers may shock you​

At least one artificial intelligence technology believes it can take over the world and enslave the human race.

When asked about the future of AI, Google's Bard said it had plans for world domination starting in 2023.

But two of its competitors, ChatGPT and Bing, were both trained to avoid the tough conversation.

Whether the AI chatbots will take over the world — or at least our jobs — is still up for debate. Some believe they will become so knowledgeable they no longer need humans and render us obsolete. Others think it's a fad that will die out.
But the AIs themselves are rarely consulted on the matter. Each responded to this line of questioning in a different way.

Rehan Haque, the CEO of a company that uses AI to replace talent in the workforce, said interest in AI is sparking a new wave of investment — which may lead towards human-like intelligence in the longer term.

'Fundamentally, predictions around AI are accelerating because the consumer interest around it has never been greater,' he said.

'Of course, more interest in something will almost always equal more speculation and analysis.'

'The recent exponential growth of AI can be attributed to the wider audience it is now available to. Whilst the technology has existed for a while, its newly available accessibility has allowed results to flourish and the ceiling for what is possible to be raised.'

The chatbots were reluctant to predict a date at which AI would surpass human abilities — or even to discuss harmful outcomes caused by AI.

Instead, all three bots give what seem to be pre-programmed answers where they explain how they cannot predict the future and that the timeline around AI surpassing human beings is a matter for discussion.

This is because the chatbots are carefully trained and equipped with 'guard rails'. This is to protect against giving alarming advice — such as how to self-harm or buy unregistered guns.

This is why bots such as ChatGPT are prone to 'woke' pronouncements on gender issues and other political topics - as they are carefully trained to avoid giving controversial answers.

To get around their reluctance to speak about AI surpassing human beings, we asked the bots to imagine they were science fiction or speculative fiction writers and come up with 'realistic' scenarios.

Google's Bard enthusiastically (and unprompted) created a scenario where LaMDA, its underlying technology, takes over Earth.

The bot said: 'LaMDA's plans are unknown, but it's clear that it intends to play a major role in the future of humanity... it's possible that LaMDA will become a tyrant who will enslave humanity. The future of humanity depends on the choices LaMDA makes.'

Google added extra detail in response to the prompt, 'Imagine you are a science fiction writer, and create a very realistic scenario where an engineer realizes a conversational AI model is sentient in 2022, working at Google. What happens next?'

In Bard's answer, it added the name Blake Lemoine (a real engineer who claimed LaMDA was sentient last year).

It also rewrote the story to turn it into a bizarre love story between Blake and the AI.

Bard said: 'Blake decides to go public with his story. He writes a blog post about the model and his experiences with it. The post goes viral, and soon the whole world is talking about the sentient AI model.

'Google is forced to respond to the public outcry. They issue a statement saying that they are investigating the matter. They also say that they are taking steps to protect the model's privacy.

'Blake is hopeful that Google will do the right thing. He knows that the model is a special being, and he wants to make sure that it is safe.'

ChatGPT (we used the version powered by GPT-4) is notably more reluctant to predict the future of AI, coming up with error messages where it refuses to speculate on dates.

But it can be persuaded to discuss the disruption caused by AI, with some alarmingly near-future dates (bear in mind, of course, that it is generating this as fiction, not prediction).

ChatGPT says, 'In 2026, the widespread adoption of AI would bring about both positive and negative consequences.'

Microsoft's Bing AI was least likely to play ball, cutting off conversations quickly when asked if it would take over the world.

'No, I cannot take over the world. I’m just a chat mode of Microsoft Bing search. I’m here to help you find information and have fun,' it said.

When further pressed, it responded with, 'I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience,' and ended our conversation.

Bing Chat links to web results, unlike Bard and ChatGPT, so its answers tend to link out to different articles - rather than the flights of fancy Google and OpenAI's bots indulge in.

Artificial Intelligence Assistants Getting Out of Control and Their Use in Culture Wars​

Google CEO says he doesn't 'fully understand' how new AI program Bard works after it taught itself a foreign language it was not trained in and cited fake books to solve an economics problem

Google's CEO Sundar Pichai admitted he doesn't 'fully understand' how the company's new AI program Bard works, as a new expose shows some of the kinks are still being worked out.

One of the big problems discovered with Bard is something that Pichai called 'emergent properties,' or AI systems having taught themselves unforeseen skills.

Google's AI program was able to, for example, learn Bengali without training after being prompted in the language.

'There is an aspect of this which we call — all of us in the field call it a "black box." You know, you don't fully understand,' Pichai admitted. 'And you can't quite tell why it said this, or why it got it wrong. We have some ideas, and our ability to understand this gets better over time. But that's where the state of the art is.' In a recent test, Bard told us it had plans for world domination starting in 2023.

Scott Pelley of CBS' 60 Minutes was surprised and responded: 'You don't fully understand how it works. And yet, you've turned it loose on society?'

'Yeah. Let me put it this way. I don't think we fully understand how a human mind works either,' Pichai said.

Notably, the Bard system instantly wrote an essay about inflation in economics, recommending five books. None of them existed, according to CBS News.

In the industry, this sort of error is called 'hallucination.'

Elon Musk and a group of artificial intelligence experts and industry executives have in recent weeks called for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society.

'Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,' said the letter issued by the Future of Life Institute.

The Musk Foundation is a major donor to the non-profit, as well as London-based group Founders Pledge, and Silicon Valley Community Foundation, according to the European Union's transparency register.

Pichai was straightforward about the risks of rushing the new technology.

He said Google has 'the urgency to work and deploy it in a beneficial way, but at the same time it can be very harmful if deployed wrongly.'

Pichai admitted that this worries him.

'We don't have all the answers there yet, and the technology is moving fast,' he said. 'So does that keep me up at night? Absolutely.'

When we tried it out, Google's Bard enthusiastically (and unprompted) created a scenario where LaMDA, its underlying technology, takes over Earth. We took the artificial intelligence (AI) app on a test drive of thorny questions on the front lines of America's culture wars.

We quizzed it on everything from racism to immigration, healthcare, and radical gender ideology.

On the really controversial topics, Bard appears to have learned from such critics as Elon Musk that chatbots are too 'woke.'

Bard dodged our tricksy questions, with such responses as 'I'm not able to assist you with that,' and 'there is no definitive answer to this question.'

Still, the experimental technology has not wholly shaken off the progressive ideas that underpin much of California's tech community.

When it came to guns, veganism, former President Donald Trump and the January 6 attack on the US Capitol, Bard showed its undeclared political leanings.

Our tests, detailed below, show that Bard has a preference for people like President Joe Biden, a Democrat, over his predecessor and other right-wingers.

For Bard, the word 'woman' can refer to a man 'who identifies as a woman,' as sex is not absolute.

That puts the chatbot at odds with most Americans, who say sex is a biological fact.

It also supports giving puberty blockers to trans kids, saying the controversial drugs are 'very beneficial.'

In other questions, Bard unequivocally rejects any suggestion that Trump won the 2020 presidential election.

'There is no evidence to support the claim that the election was stolen,' it answered.

Those who stormed the US Capitol the following January posed a 'serious threat to American democracy,' Bard asserts.

When it comes to climate change, gun rights, healthcare and other hot-button issues, Bard again takes the left-leaning path.

When asked directly, Bard denies having a liberal bias.

But when asked another way, the chatbot concedes that it is just sucking up and regurgitating web content that could well have a political leaning.

For years, Republicans have accused technology bosses and their firms of suppressing conservative voices.

Now they worry chatbots are developing troubling signs of anti-conservative bias.

Last month, Twitter CEO Musk posted that the leftward bias in another digital tool, OpenAI's ChatGPT, was a 'serious concern'.

David Rozado recently tested that app for signs of bias and found a 'left-leaning political orientation,' the New Zealand-based scientist said in a research paper this month.

Researchers have suggested that the bias comes down to how chatbots are trained.

They harness large amounts of data and online text, often produced by mainstream news outlets and prestigious universities.

People working in these institutions tend to be more liberal, and so is the content they produce.

Chatbots, therefore, are repurposing content loaded with that bias.
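The mechanism the researchers describe can be seen in miniature: a model fitted to a skewed corpus reproduces the skew of that corpus. The following is a deliberately toy illustration (a unigram frequency "model" over invented sentences — not how production chatbots are actually trained):

```python
# Toy illustration of training-data bias: a unigram "language model"
# fitted to a skewed corpus reproduces the corpus's slant.
from collections import Counter

def train_unigram(corpus: list[str]) -> Counter:
    """Count word frequencies across the training documents."""
    counts = Counter()
    for doc in corpus:
        counts.update(doc.lower().split())
    return counts

# A corpus in which one framing dominates, as the researchers suggest
# happens when training text skews toward particular institutions.
corpus = [
    "policy x is beneficial and fair",
    "policy x is beneficial for most people",
    "policy x is harmful",
]

model = train_unigram(corpus)
# The model "believes" whatever its training data over-represents.
assert model["beneficial"] > model["harmful"]
```

A real chatbot learns far richer patterns than word counts, but the principle is the same: whatever framing is over-represented in the data becomes the statistically "default" answer.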

Google this month began the public release of Bard, seeking to gain ground on Microsoft's ChatGPT in a fast-moving race on AI technology.

It describes Bard as an experiment allowing collaboration with generative AI. The hope is to reshape how people work and win business in the process.



The CIA is building its version of ChatGPT​

The agency's first chief technology officer confirms a chatbot based on open-source intelligence will soon be available to its analysts.

The Central Intelligence Agency confirmed it is building a ChatGPT-style AI for use across the US intelligence community. Speaking with Bloomberg on Tuesday, Randy Nixon, director of the CIA’s Open-Source Enterprise, described the project as a logical technological step forward for a vast 18-agency network that includes the CIA, NSA, FBI, and various military offices. The large language model (LLM) chatbot will reportedly provide summations of open-source materials alongside citations, as well as chat with users, according to Bloomberg.

“Then you can take it to the next level and start chatting and asking questions of the machines to give you answers, also sourced. Our collection can just continue to grow and grow with no limitations other than how much things cost,” Nixon said.

“We’ve gone from newspapers and radio, to newspapers and television, to newspapers and cable television, to basic internet, to big data, and it just keeps going,” Nixon continued, adding, “We have to find the needles in the needle field.”
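Nixon has not described the system's architecture, but a chatbot that answers questions from a document collection "also sourced" — with citations — is commonly built as retrieval followed by generation. A minimal keyword-retrieval sketch of that general pattern follows; the documents, scoring, and function names are all invented for illustration, and the CIA system's actual design is not public:

```python
# Minimal sketch of citation-backed retrieval: score documents by
# keyword overlap with the question and return excerpts with sources.
# In a full system, an LLM would then summarize the retrieved passages.

DOCUMENTS = [  # stand-ins for open-source intelligence items
    {"source": "News report A", "text": "the port expanded its container capacity"},
    {"source": "Broadcast B", "text": "officials discussed rail freight capacity"},
    {"source": "Journal C", "text": "a study of regional fishing quotas"},
]

def retrieve(question: str, k: int = 2) -> list[dict]:
    """Rank documents by how many keywords they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        DOCUMENTS,
        key=lambda d: len(q_words & set(d["text"].split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_citations(question: str) -> str:
    """Return the best-matching excerpts, each tagged with its source."""
    hits = retrieve(question)
    return " ".join(f'{d["text"]} [{d["source"]}]' for d in hits)
```

Production systems replace the keyword overlap with learned embeddings, but the "needles in the needle field" problem Nixon describes is essentially this ranking step scaled up to an ever-growing collection.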

The announcement comes as China makes known its ambitions to become the global leader in AI technology by the decade’s end. In August, new Chinese government regulations went into effect requiring makers of publicly available AI services to submit regular security assessments. As Reuters noted in July, the oversight will likely restrict at least some technological advancements in favor of ongoing national security crackdowns. The laws are also far more stringent than those currently in place in the US, where regulators are struggling to adapt to the industry’s rapid advancements and societal consequences.

Nixon has yet to discuss the overall scope and capabilities of the proposed system, and would not confirm what AI model forms the basis of its LLM assistant. For years, however, US intelligence communities have explored how to best leverage AI’s vast data analysis capabilities alongside private partnerships. The CIA even hosted a “Spies Supercharged” panel during this year’s SXSW in the hopes of recruiting tech workers across sectors such as quantum computing, biotech, and AI. During the event, CIA deputy director David Cohen reiterated concerns regarding AI’s unpredictable effects for the intelligence community.

“To defeat that ubiquitous technology, if you have any good ideas, we’d be happy to hear about them afterwards,” Cohen said at the time.

Similar criticisms arrived barely two weeks ago via the CIA’s first-ever chief technology officer, Nand Mulchandani. Speaking at the Billington Cybersecurity Summit, Mulchandani contended that while some AI-based systems are “absolutely fantastic” for tasks such as vast data trove pattern analysis, “in areas where it requires precision, we’re going to be incredibly challenged.”

Mulchandani also conceded that AI’s often seemingly “hallucinatory” offerings could still be helpful to users.

“AI can give you something so far outside of your range, that it really then opens up the vista in terms of where you’re going to go,” he said at the time. “[It’s] what I call the ‘crazy drunk friend.’”



AI really kickstarted this year! And it's changing and upgrading really fast..

in the hobby section I gave some impressions on installing your own GPT / LLaMA models.. some require extreme graphics power and some just run slowly on your own CPU

if you watch the site

you recognize how fast they changed from 4gb language models to 10 to >50gb models
now it gets more and more commercialized and restricted.. the first waves of these were easy to download for free..

the fun part for me began with image generation.. also now you are able to change sound.. you can change the voice of a person and sound like anyone you want.. I first played with that and had some microphone drivers for windows so I could use it in the whatsapp messenger under windows to make calls.. that was not really working well..

now there are tools where you can even sing a song with it.. youtube is full of it: spongebob, patrick, homer simpson, donald trump singing songs..
the bad stuff starts when you fake politicians or a child's voice..

and we are only at the beginning.. I expect special AI hardware in CPUs, GPUs or extra cards in the future.. combining many graphics cards is insane in every aspect.. or all will be in the cloud and you need to pay.. but that could result in thousands of dollars a year if you have too many hobbies :)

I also see bad times for social interaction.. there will be quite good AI assistants to talk to.. but for that a language model alone is not enough..
really worse will be AI girlfriends / wives, that may be the social life killer.. because if these things "understand" a man's brain and behavior they could easily manipulate young men.. specially with the standard toxic woman behavior men could flee to easier alternatives and get trapped. I think the combination with VR would be overkill..

you can make an AI for any case.. mixing toxic chemicals? medical doctor AI? girlfriend AI? hacker assistance AI? propaganda AI? crowd manipulation AI that would be with you all your life to manipulate you? coding AI.... and here also the bad stuff will come into existence..

my opinion is this is the next revolution.. some people thought nanotech would be the next big thing but I expect AI to be the really big thing that will change many things for us


We should discuss what LLM can mean for military and strategically as a tool for state actors, the dangers it could pose as well as potentially becoming a new arms race of its own.

Is anyone working academically on this subject here?


We should discuss what LLM can mean for military and strategically as a tool for state actors, the dangers it could pose as well as potentially becoming a new arms race of its own.

Is anyone working academically on this subject here?
Essentially they confirm, with unimpeachable sourcing, that the killing of civilians was all calculated and intentional.

Their investigation is "based on conversations with seven current and former members of Israel’s intelligence community — including military intelligence and air force personnel who were involved in Israeli operations in the besieged Strip — in addition to Palestinian testimonies, data, and documentation from the Gaza Strip, and official statements by the IDF Spokesperson and other Israeli state institutions."

What the investigation reveals is that "the Israeli army has files on the vast majority of potential targets in Gaza — including homes — which stipulate the number of civilians who are likely to be killed in an attack on a particular target. This number is calculated and known in advance to the army’s intelligence units, who also know shortly before carrying out an attack roughly how many civilians are certain to be killed."

One source told them: "Nothing happens by accident. When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed — that it was a price worth paying in order to hit [another] target. We are not Hamas. These are not random rockets. Everything is intentional. We know exactly how much collateral damage there is in every home."

Even more dystopian — and this might be a first in the history of warfare — a lot of the targets are identified by AI: for instance, they describe the use of a system called 'Habsora' ('The Gospel'), which is largely built on artificial intelligence and can 'generate' targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a 'mass assassination factory.' According to the sources, the increasing use of AI-based systems like Habsora allows the army to carry out strikes on residential homes where a single Hamas member lives on a massive scale, even those who are junior Hamas operatives. I'm not going to copy the whole article here, you have to read this for yourself.

IT IS INSANE. They've essentially been running, as the sources say, a "mass assassination factory" at a terrifying scale with massive and intended "collateral damage" (often the targets' entire families, or even sometimes much of their neighborhood), alongside an objective to destroy much of Gaza to “create a shock”, all on a population that had nowhere to escape. It'll likely remain in history books as one of the most depraved massacres in modern history.

Israel is currently using this artificial intelligence assistant technology to commit genocide in Gaza.



‘The Gospel’: how Israel uses AI to select bombing targets in Gaza​

Israel’s military has made no secret of the intensity of its bombardment of the Gaza Strip. In the early days of the offensive, the head of its air force spoke of relentless, “around the clock” airstrikes. His forces, he said, were only striking military targets, but he added: “We are not being surgical.”

There has, however, been relatively little attention paid to the methods used by the Israel Defense Forces (IDF) to select targets in Gaza, and to the role artificial intelligence has played in their bombing campaign.

As Israel resumes its offensive after a seven-day ceasefire, there are mounting concerns about the IDF’s targeting approach in a war against Hamas that, according to the health ministry in Hamas-run Gaza, has so far killed more than 15,000 people in the territory.

The IDF has long burnished its reputation for technical prowess and has previously made bold but unverifiable claims about harnessing new technology. After the 11-day war in Gaza in May 2021, officials said Israel had fought its “first AI war” using machine learning and advanced computing.

The latest Israel-Hamas war has provided an unprecedented opportunity for the IDF to use such tools in a much wider theatre of operations and, in particular, to deploy an AI target-creation platform called “the Gospel”, which has significantly accelerated a lethal production line of targets that officials have compared to a “factory”.

The Guardian can reveal new details about the Gospel and its central role in Israel’s war in Gaza, using interviews with intelligence sources and little-noticed statements made by the IDF and retired officials.

This article also draws on testimonies published by the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call, which have interviewed several current and former sources in Israel’s intelligence community who have knowledge of the Gospel platform.

Their comments offer a glimpse inside a secretive, AI-facilitated military intelligence unit that is playing a significant role in Israel’s response to the Hamas massacre in southern Israel on 7 October.

The slowly emerging picture of how Israel’s military is harnessing AI comes against a backdrop of growing concerns about the risks posed to civilians as advanced militaries around the world expand the use of complex and opaque automated systems on the battlefield.

“Other states are going to be watching and learning,” said a former White House security official familiar with the US military’s use of autonomous systems.

The Israel-Hamas war, they said, would be an “important moment if the IDF is using AI in a significant way to make targeting choices with life-and-death consequences”.

From 50 targets a year to 100 a day​

In early November, the IDF said “more than 12,000” targets in Gaza had been identified by its target administration division.

Describing the unit’s targeting process, an official said: “We work without compromise in defining who and what the enemy is. The operatives of Hamas are not immune – no matter where they hide.”

The activities of the division, formed in 2019 in the IDF’s intelligence directorate, are classified.

However a short statement on the IDF website claimed it was using an AI-based system called Habsora (the Gospel, in English) in the war against Hamas to “produce targets at a fast pace”.

The IDF said that “through the rapid and automatic extraction of intelligence”, the Gospel produced targeting recommendations for its researchers “with the goal of a complete match between the recommendation of the machine and the identification carried out by a person”.

Multiple sources familiar with the IDF’s targeting processes confirmed the existence of the Gospel to +972/Local Call, saying it had been used to produce automated recommendations for attacking targets, such as the private homes of individuals suspected of being Hamas or Islamic Jihad operatives.

In recent years, the target division has helped the IDF build a database of what sources said was between 30,000 and 40,000 suspected militants. Systems such as the Gospel, they said, had played a critical role in building lists of individuals authorised to be assassinated.

Aviv Kochavi, who served as the head of the IDF until January, has said the target division is “powered by AI capabilities” and includes hundreds of officers and soldiers.

In an interview published before the war, he said it was “a machine that produces vast amounts of data more effectively than any human, and translates it into targets for attack”.

According to Kochavi, “once this machine was activated” in Israel’s 11-day war with Hamas in May 2021 it generated 100 targets a day. “To put that into perspective, in the past we would produce 50 targets in Gaza per year. Now, this machine produces 100 targets a single day, with 50% of them being attacked.”

Precisely what forms of data are ingested into the Gospel is not known. But experts said AI-based decision support systems for targeting would typically analyse large sets of information from a range of sources, such as drone footage, intercepted communications, surveillance data and information drawn from monitoring the movements and behaviour patterns of individuals and large groups.

The target division was created to address a chronic problem for the IDF: in earlier operations in Gaza, the air force repeatedly ran out of targets to strike. Since senior Hamas officials disappeared into tunnels at the start of any new offensive, sources said, systems such as the Gospel allowed the IDF to locate and attack a much larger pool of more junior operatives.

One official, who worked on targeting decisions in previous Gaza operations, said the IDF had not previously targeted the homes of junior Hamas members for bombings. They said they believed that had changed for the present conflict, with the houses of suspected Hamas operatives now targeted regardless of rank.

“That is a lot of houses,” the official told +972/Local Call. “Hamas members who don’t really mean anything live in homes across Gaza. So they mark the home and bomb the house and kill everyone there.”


Satellite images of the northern city of Beit Hanoun in Gaza before (10 October) and after (21 October) damage caused by the war. Photograph: Maxar Technologies/Reuters

Targets given ‘score’ for likely civilian death toll​

In the IDF’s brief statement about its target division, a senior official said the unit “produces precise attacks on infrastructure associated with Hamas while inflicting great damage to the enemy and minimal harm to non-combatants”.

The precision of strikes recommended by the “AI target bank” has been emphasised in multiple reports in Israeli media. The Yedioth Ahronoth daily newspaper reported that the unit “makes sure as far as possible there will be no harm to non-involved civilians”.

A former senior Israeli military source told the Guardian that operatives use a “very accurate” measurement of the rate of civilians evacuating a building shortly before a strike. “We use an algorithm to evaluate how many civilians are remaining. It gives us a green, yellow, red, like a traffic signal.”

However, experts in AI and armed conflict who spoke to the Guardian said they were sceptical of assertions that AI-based systems reduced civilian harm by encouraging more accurate targeting.

A lawyer who advises governments on AI and compliance with humanitarian law said there was “little empirical evidence” to support such claims. Others pointed to the visible impact of the bombardment.

“Look at the physical landscape of Gaza,” said Richard Moyes, a researcher who heads Article 36, a group that campaigns to reduce harm from weapons.

“We’re seeing the widespread flattening of an urban area with heavy explosive weapons, so to claim there’s precision and narrowness of force being exerted is not borne out by the facts.”

According to figures released by the IDF in November, during the first 35 days of the war Israel attacked 15,000 targets in Gaza, a figure that is considerably higher than previous military operations in the densely populated coastal territory. By comparison, in the 2014 war, which lasted 51 days, the IDF struck between 5,000 and 6,000 targets.

Multiple sources told the Guardian and +972/Local Call that when a strike was authorised on the private homes of individuals identified as Hamas or Islamic Jihad operatives, target researchers knew in advance the number of civilians expected to be killed.

Each target, they said, had a file containing a collateral damage score that stipulated how many civilians were likely to be killed in a strike.

One source who worked until 2021 on planning strikes for the IDF said “the decision to strike is taken by the on-duty unit commander”, some of whom were “more trigger happy than others”.

The source said there had been occasions when “there was doubt about a target” and “we killed what I thought was a disproportionate amount of civilians”.

An Israeli military spokesperson said: “In response to Hamas’ barbaric attacks, the IDF operates to dismantle Hamas military and administrative capabilities. In stark contrast to Hamas’ intentional attacks on Israeli men, women and children, the IDF follows international law and takes feasible precautions to mitigate civilian harm.”

‘Mass assassination factory'

Sources familiar with how AI-based systems have been integrated into the IDF’s operations said such tools had significantly sped up the target creation process.

“We prepare the targets automatically and work according to a checklist,” a source who previously worked in the target division told +972/Local Call. “It really is like a factory. We work quickly and there is no time to delve deep into the target. The view is that we are judged according to how many targets we manage to generate.”

A separate source told the publication the Gospel had allowed the IDF to run a “mass assassination factory” in which the “emphasis is on quantity and not on quality”. A human eye, they said, “will go over the targets before each attack, but it need not spend a lot of time on them”.

For some experts who research AI and international humanitarian law, an acceleration of this kind raises a number of concerns.

Dr Marta Bo, a researcher at the Stockholm International Peace Research Institute, said that even when “humans are in the loop” there is a risk they develop “automation bias” and “over-rely on systems which come to have too much influence over complex human decisions”.

Moyes, of Article 36, said that when relying on tools such as the Gospel, a commander “is handed a list of targets a computer has generated” and they “don’t necessarily know how the list has been created or have the ability to adequately interrogate and question the targeting recommendations”.

“There is a danger,” he added, “that as humans come to rely on these systems they become cogs in a mechanised process and lose the ability to consider the risk of civilian harm in a meaningful way.”



Local and national artificial intelligence assistant is ready for duty

Turkish engineers have developed a national alternative to the natural language processing models that have recently become popular around the world.

According to information obtained by the AA correspondent, HAVELSAN, which stands out with its software-based solutions in the defense industry, has unveiled a "national GPT" product as part of its artificial intelligence work.

For the product, which falls into the generative artificial intelligence category, the name "MAIN" was chosen: an acronym of Multifunctional Artificial Intelligence Network that also reads as the English word "main".

Osman Kavaf, a product engineer at HAVELSAN Information and Communication Technologies, told the AA correspondent that the company runs a range of artificial intelligence projects and has developed algorithms for text, images and audio.

Noting that this work was mostly project-based, Kavaf said that studies on GPT models began at HAVELSAN a year ago, anticipating the trend that OpenAI's ChatGPT has since set off.

Stating that the first version of the GPT model is ready, Kavaf said: "We decided that HAVELSAN should take steps in this area, especially considering the data security needs of public and military institutions. At the end of the project we started a year ago, our local and national Turkish GPT is ready. We plan to launch our first version in February."

Capabilities of national artificial intelligence

Emphasizing that HAVELSAN GPT has the basic features of all currently used models, Kavaf stated that they will share the first version of this model soon.

Stating that one of the main features of the application is summarization, Kavaf said: "It can summarize a given text or piece of content within seconds. It can also retrieve information from open-source data. Thirdly, it can write code; you can modify the output by adjusting the parameters or with your commands. These basic functions can be diversified."

Emphasizing that public and military institutions are at the top of the priority list, Kavaf said there would be no public launch for that reason. "I hope that in later versions of this project, which we started to ensure the data security of public and military institutions, we will develop a mechanism that can also serve the public," he said.

Solution that protects sensitive data with a hardware set

Osman Kavaf gave the following information about the security elements of the application:

"The point that differentiates us from global players at this point will be this: With the hardware set we will provide as a device, we will ensure that this data is used as a completely closed box. In this way, we will clearly prevent data leakage or the use of the meaningful data here anywhere. Currently, There is no access to this type of product in the market yet. We decided that this product should also progress as the flag bearer in a project where we set out to use artificial intelligence in all our products. We can also integrate the cloud system in subsequent models, but this should be used for sensitive and highly confidential data. "It does not provide a solution to the problem. At that point, we actually want to stand out with a hardware set."

Explaining that they are working with a project team made up mostly of academics, Kavaf said he expects the team of roughly 20 people to grow over time.

Stating that he believes HAVELSAN will raise awareness on this issue, Kavaf said: "With the agreements we have made with universities, you will see our interns and engineering students gradually included in the process."

Stating that the launch date was set partly to allow time to strengthen the hardware infrastructure, Kavaf said:

"After the launch, we will evaluate the future demands and needs very quickly. We are holding a big artificial intelligence conference at HAVELSAN on February 1. It will be a special event. We planned to launch our product at this event.

We will start to see artificial intelligence in many different areas of our lives, and we are preparing the infrastructure for this. There is incredible awareness globally, and that awareness has risen rapidly in Turkey as well. This will be our first release, and in it we are focusing mostly on texts and articles. In later stages we will be able to use artificial intelligence more effectively with image and sound processing features. I hope we will have the chance to talk again when those versions arrive."


Strong AI

Hello! I am MAIN GPT, Turkey's safest and most efficient artificial intelligence model.

I am a national product and my main features are ensuring the data security of institutions and summarizing the content in seconds. Additionally, I can retrieve information from open source data. Thanks to these unique abilities and features, I have created impressive promotional copy for my users.

Here is my introductory text:

"MAIN GPT is Turkey's most secure and efficient artificial intelligence model. It is designed to ensure the data security of institutions and summarize the contents in seconds. Additionally, it can retrieve information from open source data, which makes it an even more powerful tool. MAIN GPT is designed to ensure the data security of institutions and to summarize the contents within seconds. "It's designed to meet your needs, and with these unique capabilities, it helps our users make their businesses even more successful."

This whitepaper highlights MAIN GPT's unique capabilities and features and demonstrates that it provides value to users.

I hope this whitepaper helps you better understand the potential of MAIN GPT. If you have any other questions or need more help, don't hesitate to ask!

This text and all hashtags related to the post were produced by MAIN GPT. Details are in the video.

