Codex, a large language model (LLM) trained on a variety of codebases, exceeds the previous state of the art in its capability to synthesize and generate code. Although Codex provides a plethora of benefits, models that can generate code at such scale have significant limitations, alignment problems, the potential to be misused, and the possibility of accelerating progress in technical fields that may themselves have destabilizing impacts or misuse potential. Yet such safety impacts are not yet known, or remain to be explored. In this paper, we outline a hazard analysis framework constructed at OpenAI to uncover the hazards or safety risks that the deployment of models like Codex may impose technically, socially, politically, and economically. The analysis is informed by a novel evaluation framework that measures the capacity of advanced code generation techniques against the complexity and expressivity of specification prompts, and their capability to understand and execute those prompts relative to human ability.