Shifting corporate priorities, Superalignment, and safeguarding humanity: Why OpenAI’s safety researchers keep leaving
A number of senior AI safety researchers at OpenAI, the organisation behind ChatGPT, have left the company. Those resigning often cite shifts in company culture and a lack of investment in AI safety as their reasons for leaving.

To put it another way: though the ship may not be taking on water, the safety team is departing in its own little dinghy, and that is likely cause for some concern.