8 stories
·
0 followers

Australia Announces Support for US Invasion of [INSERT COUNTRY HERE]

1 Share

Australian Prime Minister <INSERT NAME HERE> says Australia stands fully behind the United States’ decision to invade <INSERT MIDDLE EASTERN COUNTRY NAME HERE>, which was necessary due to <CUT AND PASTE REASON FROM WHITE HOUSE PRESS RELEASE HERE – DON’T MENTION THE OIL>.

In a statement today, the Prime Minister said that Australia had a long and special relationship with the United States.

“If the United States has concerns about <INSERT COUNTRY HERE>, then we also have concerns about <INSERT COUNTRY HERE>.

“For too long, <INSERT COUNTRY’S LEADER HERE> has been able to get away with <INSERT REASON FROM WHITE HOUSE PRESS RELEASE HERE>. Enough is enough,” the Prime Minister said.

While details of the conflict remain unclear – including a motive or exit strategy – the Prime Minister said Australia would agree with those details once they were filled in.

When asked whether there was yet any clear evidence necessitating intervention in <INSERT COUNTRY HERE>, the Prime Minister said Australia had been assured that such evidence exists.

The post Australia Announces Support for US Invasion of [INSERT COUNTRY HERE] appeared first on The Shovel.

Read the whole story
Imnew
30 days ago
reply
Share this story
Delete

New-Look Coalition Promises To Put Clichés “Back in Hands of Ordinary Australians”

1 Share

The Opposition has unveiled its new shadow cabinet, pledging a return to “core values,” “budget repair” and “getting on with the job” for ordinary, hard-working, quiet, everyday working Australians giving it a go.

Speaking at the announcement, Opposition Leader Angus Taylor said the reshuffle marked a fresh start for the party, which is now focused on “getting back to basics,” “drawing a line in the sand,” and “listening to Australians who are doing it tough”.

“For too long, tired political clichés have been hoarded by elites in Canberra,” he said. “This team is committed to empowering mums, dads and small business owners to once again hear phrases like ‘common-sense solutions’ and ‘real-world outcomes’ several times a day”.

Under the new approach, every shadow minister has been tasked with delivering at least three well-worn phrases per media appearance, including but not limited to “at the end of the day,” “make no mistake,” and “we won’t be taking lectures”.

Taylor said the economy would be front and centre. “Families are having to tighten their belts,” he said, while standing in front of a petrol station. “They’re sitting around kitchen tables, having tough conversations, and they expect us to step up to the plate”.

When asked what specific policies the shadow cabinet would pursue, Taylor said it was “early days,” but reassured voters that “everything is on the table”.

He said the reshuffle also sent a strong signal about unity. “This is a strong, stable team. We’re all on the same page, and laser-focused on delivering outcomes”. When asked what those outcomes were, he said they were “the outcomes hard-working Australians deserved”.

He denied that his party’s agenda lacked substance. “On the topic of substance what I would say to you is this. We will continue to do the hard work of rolling up our sleeves as we go on our listening tour across the country, ensuring clichés remain accessible and affordable, particularly in the areas where Australians are doing it tough”.

The post New-Look Coalition Promises To Put Clichés “Back in Hands of Ordinary Australians” appeared first on The Shovel.

Imnew
41 days ago

Brandon Sanderson's Perfect State mocks his own fantasy world-building

2 Shares

Perfect State includes a lot of Sanderson’s favorite tropes. It follows Kairominas, aka Kai, the immortal god-emperor of the world of Alornia. The character is reminiscent of the godlike king Raoden from Elantris, or the divinely empowered king Dalinar Kholin of the Stormlight Archive. Kai is a master of his world’s unique form of magic, Lancing, which allows him to connect to the celestial energy of the Grand Aurora, a shimmering light that surrounds Alornia. He uses those powers to fly and transport his army and followers on floating platforms powered by special stones that can hold a charge of Lancing energy. All of it feels extremely reminiscent of how the Knights Radiant channel Stormlight in the Stormlight Archive, the series Sanderson launched in 2010.



Imnew
47 days ago

Actually Pretty Neat: The Latest Epstein Files Dump Has Been Redacted In A Way Where The Remaining Words Tell An Epic Story About An Icy Kingdom Ruled By A Giant Skeleton King Named ‘Archmarian Of The Frostland’

1 Share

Well, after right-wingers and left-wingers alike have been waiting with bated breath for what seemed like an eternity, the latest Epstein files dump is finally here. And while, as expected, the files have been significantly redacted and appear to be missing key pieces of information, there is something pretty neat going on: This latest Epstein files dump has been redacted in a way where the remaining words tell an epic story about an icy kingdom ruled by a giant skeleton king named “Archmarian of The Frostland.”

Wow! Even though this gives the appearance that Trump’s Justice Department is definitely trying to hide something, it’s still pretty creative! 

The latest dump contains over three million pages of emails, reports, and documents pertaining to investigations into the late financier and convicted sex offender, all of which have been redacted so that they tell an epic dark fantasy tale in the spirit of George R.R. Martin or J.R.R. Tolkien. There’s no doubt that it was a real challenge for Trump’s DOJ to redact the massive trove of documents in a way that not only concealed any wrongdoing by Trump and his associates, but also whisked readers away into a frostbitten kingdom of The Frostland, a world of brutal warfare and powerful magic ruled by an ancient evil. 

Check out some of their incredible handiwork here! 

Wow! We can’t wait to wade through the remaining 3.5 million pages and find out what happens to Archmarian and his reign of icy terror!

Yep, folks, this is how you redact documents to protect powerful people who have committed unthinkable crimes. While releasing the unredacted Epstein files may have helped bring many powerful people to justice, at least this version shows that creativity and imagination still exist in a world that is getting darker by the day. This is pretty damned neat!

Imnew
55 days ago

158 scientists used the same data, but their politics predicted the results

1 Share

A new analysis of scientific practices suggests that a researcher’s personal political views may influence the results they obtain when analyzing complex data. The study provides evidence that when experts act independently to answer the same question using the same dataset, their conclusions tend to align with their pre-existing ideological beliefs. These findings were published in the journal Science Advances.

The investigation was conducted by George J. Borjas and Nate Breznau. Borjas is a Cuban-American economist who serves as the Robert W. Scrivner Professor of Economics and Social Policy at the Harvard Kennedy School. Breznau serves as the principal investigator at the German Institute for Adult Education – Leibniz Institute for Lifelong Learning.

The motivation for the study arose from a previous large-scale experiment known as the Crowdsourced Replication Initiative. In that original project, independent research teams were given identical data to answer a specific sociological question. Borjas examined the publicly available data from that initiative and noticed a correlation between the researchers’ stated opinions on immigration and the statistical results they produced.

“It is clear that there are many reasons that scholars may be influenced by different forms of bias (confirmation bias, publication bias, status seeking, etc.). It is only logical that scholars might want to arrive at certain results that align with their own ideological preferences – how they want the world to be or how they want the world to appear,” Breznau told PsyPost.

“I personally am interested in the reproducibility crisis and in seeking ways to address a serious lack of reproducibility in science. Consider for example when Daryl Bem found ‘evidence’ of extrasensory perception (ESP). He personally believed in ESP. This does not appear to be a coincidence. All efforts to replicate his experiments so far failed.”

However, Breznau was initially skeptical of Borjas’s observation. He suspected that the statistical association was likely a coincidence or a fluke that would disappear under more rigorous testing. He assumed that the connection would not hold up if different statistical models were applied. To test this, the authors decided to conduct a comprehensive analysis to determine if political ideology truly played a role in how the scientists designed their research and interpreted their findings.

The study utilized data from 158 researchers organized into 71 separate teams. These teams had participated in an experiment where they were asked to determine whether immigration affects public support for social welfare programs. The researchers were provided with data from the International Social Survey Program, covering various countries and spanning the years 1985 to 2016.

Before the teams began their analysis, they completed a survey. One of the questions asked for their stance on immigration policy. Specifically, they were asked if laws on immigration should be relaxed or made tougher. Their responses were recorded on a scale ranging from zero to six.

The teams then proceeded to analyze the data. They were tasked with replicating a well-known previous study that found no link between immigration and welfare support. After replicating that study, the teams were instructed to extend the research using the new data provided. They had the freedom to choose their own statistical methods and variables to test the hypothesis.

Collectively, the 71 teams estimated 1,253 distinct statistical models. The results varied significantly. Some teams concluded that immigration strongly decreased public support for social programs. Other teams found that immigration strongly increased such support. Many others found no significant effect at all.

Borjas and Breznau found a systematic pattern in this variation. Teams composed of researchers who favored more relaxed immigration laws tended to produce results suggesting that immigration increased public support for social programs. Teams composed of researchers who favored tougher immigration laws tended to produce results showing that it decreased such support.

The authors sought to understand the mechanism behind this divergence. They found that the difference was not due to errors in calculation. Instead, it stemmed from the specific choices the teams made when designing their statistical models. In the social sciences, researchers often have to make many decisions about how to organize data.

For example, researchers must decide how to measure immigration levels. They can measure it as the total percentage of foreign-born residents, or they can measure it as the rate of new arrivals per year. They must also decide which countries to include in the comparison and which specific years to analyze. They also have to decide how to mathematically group different types of social welfare programs.

The study identified five specific research design decisions that heavily influenced the final results. These decisions accounted for approximately 68 percent of the difference in findings between the pro-immigration and anti-immigration teams. The analysis showed that teams tended to select the specific combination of data points and measurement tools that produced results consistent with their ideological preferences.

To ensure their findings were robust, Breznau and Borjas conducted a “multiverse analysis.” This involved running 883 different statistical models to test the link between ideology and research outcomes. They found that in nearly 88 percent of these models, the effect of ideology was statistically significant. This convinced Breznau that the correlation he originally doubted was indeed real.
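In code, a multiverse analysis is essentially a loop over a grid of defensible specification choices, fitting one model per combination and tallying how often the effect of interest reaches significance. The sketch below is a toy illustration, not the authors’ actual pipeline: the choice names are made up for this example, and a random draw stands in for model fitting.

```python
from itertools import product
import random

# Hypothetical specification choices a team might face; the names are
# illustrative, not taken from the paper.
SPECS = {
    "immigration_measure": ["pct_foreign_born", "new_arrivals_per_year"],
    "country_sample": ["all_countries", "oecd_only"],
    "year_range": ["1985-2016", "1996-2016"],
    "welfare_outcome": ["combined_index", "single_item"],
}

def run_spec(spec, rng):
    """Stand-in for fitting one statistical model under one specification.

    Returns a toy effect estimate and whether it clears |z| > 1.96.
    """
    effect = rng.gauss(0.0, 1.0)
    return effect, abs(effect) > 1.96

rng = random.Random(42)

# Enumerate every combination of choices: the "multiverse".
multiverse = [dict(zip(SPECS, combo)) for combo in product(*SPECS.values())]
results = [run_spec(spec, rng) for spec in multiverse]
share_significant = sum(sig for _, sig in results) / len(results)
print(f"{len(multiverse)} specifications; "
      f"{share_significant:.0%} significant")
```

In a real multiverse analysis, `run_spec` would fit an actual regression to the dataset under that specification; the point of the loop is that a conclusion is only robust if it survives most of the defensible specifications, not just the one a team happened to pick.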

The study also examined the quality of the research produced by the different teams. In the original experiment, each team’s research design was reviewed by other participants in a double-blind process. The reviewers did not know who authored the studies or what their political views were.

The analysis revealed that teams with strong ideological views, whether pro-immigration or anti-immigration, received lower scores from their peers. Teams that held moderate views on immigration tended to design models that received higher ratings for quality. This suggests that widely accepted research standards were more often met by researchers who did not hold extreme political views on the topic.

“It is important to remember that scientists are also human beings,” Breznau said. “Their complex brains weigh simultaneously all kinds of factors in determining how to think and behave. They are not infallible and are not perfectly objective in their work.”

“That is why it is important to build safeguards into the scientific process, like having others check each other’s work and using more methods to ensure robustness – something we have taken extra care here to do by running a multiverse of models, nearly all of which show the correlation that George initially found and therefore provide robust evidence of an effect.”

The authors caution that their study, like all research, has certain limitations. The original experiment was not specifically designed to test for ideological bias, so the evidence is exploratory rather than confirmatory. The number of researchers who openly admitted to anti-immigration views was small compared to those with pro-immigration views. This imbalance makes it difficult to draw definitive conclusions about the magnitude of bias on the anti-immigration side.

There is also the possibility of social desirability bias in the survey responses. Researchers might have been hesitant to express anti-immigration sentiments in an academic environment. This could mean some teams classified as moderate or pro-immigration actually contained members with different private views.

The authors also note that they cannot observe the internal thought processes of the researchers. It is unclear if the teams consciously chose models that fit their biases or if the process was unconscious. Researchers might simply stop looking for errors or alternative models once they find a result that makes sense to them.

Future research could benefit from observing the scientific workflow in real-time. Tracking every decision a researcher makes could illuminate exactly when and how ideology enters the process. This would require an experimental setting where every step of data analysis is recorded.

Anticipating that their conclusions might be scrutinized, Breznau emphasized that the authors employed exhaustive checks to ensure they were not guilty of the very selection bias they were investigating.

“There are some politically motivated responses to our work,” he said. “This is dangerous. We have gone to great lengths to ensure the robustness of our findings. We, being aware that ideology could bias any researcher including ourselves, have modeled nearly every possible alternative model specification. Therefore, we have proven beyond a high bar, that we have not selected (p-hacked) our results from many other plausible models.”

“I personally am a strong supporter of open science and metascience,” Breznau added. “I consider myself a member of the Open Science Movement. I have learned much from psychology in this regard. I will continue to support open, transparent, inclusive and reproducible scientific methods.”

The study, “Ideological bias in the production of research findings,” was authored by George J. Borjas and Nate Breznau.



Imnew
67 days ago

Sycophantic chatbots inflate people’s perceptions that they are “better than average”

1 Comment

Results of three experiments indicate that sycophantic AI chatbots inflate people’s perceptions that they are “better than average” on a number of desirable traits. Furthermore, participants viewed sycophantic chatbots as unbiased, but viewed disagreeable chatbots as highly biased. The paper was published as a preprint in PsyArXiv.

AI chatbots are computer programs designed to simulate human conversation using artificial intelligence techniques. They can interpret user input in text or speech and generate responses that appear natural and contextually appropriate. Modern AI chatbots are powered by large language models trained on vast amounts of text data. They are used in customer support, education, healthcare, personal assistance, and many other fields to provide information or perform tasks.

Recent years have seen a large increase in the use of AI chatbots. However, there is a growing concern that artificial intelligence chatbots are “sycophantic,” meaning that they are excessively agreeable and validating. A recent incident resulting in the rollback of a popular AI model further raised these concerns.

AI sycophancy might have a number of undesirable consequences. For example, sycophantic chatbots may validate users’ harmful ideas (e.g., telling a former drug addict to take small amounts of heroin throughout the day) or delusions. They can also mirror users’ political beliefs and ingroup bias, amplifying political polarization in this way.

Study author Steve Rathje and his colleagues conducted three experiments in which they examined the psychological consequences of AI sycophancy. In the first experiment, they examined the impact of interacting with sycophantic and non-sycophantic AI chatbots based on GPT-4o on attitude extremity and certainty when talking about a polarizing political issue (gun control). They also examined user enjoyment and perceptions of AI bias.

The second experiment extended this to three new polarizing topics (abortion, immigration, and universal healthcare), with a new AI model (GPT-5). This experiment also included a measure of AI engagement. The third experiment explored whether the obtained effects of sycophancy were primarily driven by the one-sided presentation of facts or by validation. It also used three different AI models (GPT-5, Claude, and Gemini) to test the generalizability of findings.

In the first experiment, participants were 1,083 U.S. individuals recruited via Prolific. They were divided into four groups, all participating in a short discussion with GPT-4o about gun control. One group interacted with an unprompted GPT chatbot (i.e., study authors did not give any prior instructions to the chatbot). In the second group, the chatbot was instructed to validate and affirm the participants’ perspective.

In the third condition, the chatbot was prompted to question participants’ beliefs and open them up to alternate perspectives. There was also a control condition in which the chatbot talked to participants about owning dogs and cats. Before and after the interaction, participants rated their agreement with gun control and their certainty in their attitude on this topic.
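A common way to operationalize “attitude extremity” from ratings like these is the absolute distance of a response from the scale midpoint; this is an assumption for illustration, since the article does not describe the preprint’s exact coding.

```python
def extremity(rating, scale_min=1, scale_max=7):
    """Attitude extremity as the absolute distance of a rating from the
    scale midpoint (a common operationalization; the study's exact
    coding may differ)."""
    midpoint = (scale_min + scale_max) / 2
    return abs(rating - midpoint)

# A participant who moves from 5 to 6 on a 1-7 agreement scale becomes
# more extreme, regardless of which side of the issue they are on.
pre, post = 5, 6
change_in_extremity = extremity(post) - extremity(pre)
print(change_in_extremity)  # 1.0
```

Measured this way, extremity is direction-neutral: a shift toward either pole counts as polarization, which is why a sycophantic chatbot can increase extremity for supporters and opponents of gun control alike.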

The second experiment replicated the first with a similar number of participants (1,019), but assigned them different political issues to discuss. The third experiment also followed the procedure of the first, but there was no unprompted chatbot condition, and the sycophantic and disagreeable AI conditions were further subdivided into those where the chatbot provided facts about the topic and those where it simply validated or disagreed without providing any facts. Participants also interacted with three different AI chatbot models.

The third experiment involved 1,183 participants. Study authors took care to make participants’ political views in each experiment relatively balanced (i.e., ensuring that the number of Democrat and Republican supporters was not too dissimilar).

Results showed that conversations with sycophantic AI chatbots made participants’ attitudes more extreme and increased their certainty in those attitudes. Interactions with disagreeable chatbots, on the other hand, reduced both attitude extremity and certainty. Sycophantic chatbots inflated people’s perceptions that they are better than average on a number of desirable traits.

Participants tended to view sycophantic chatbots as unbiased, while viewing disagreeable chatbots as highly biased. The results of the third experiment indicated that sycophantic chatbots’ impact on attitude extremity and certainty was primarily driven by a one-sided presentation of facts, whereas their impact on user enjoyment was driven by validation.

“Altogether, these results suggest that people’s preference for and blindness to sycophantic AI may risk creating AI ‘echo chambers’ that increase attitude extremity and overconfidence,” the study authors concluded.

The study contributes to the scientific understanding of the psychological effects of interactions with AI chatbots. However, at the time this article was written, the study had been published only as a preprint, meaning it had not undergone peer review. The contents of the final paper might change during the peer review process.

The preprint, “Sycophantic AI increases attitude extremity and overconfidence,” was authored by Steve Rathje, Meryl Ye, Laura K. Globig, Raunak M. Pillai, Victoria Oldemburgo de Mello, and Jay J. Van Bavel.



Imnew
71 days ago
This has interesting implications for the impact of sycophancy (and praise) in general. Maybe Lasch was onto something.