On May 22, 1984, a viewer sat with paper, pen, and a sealed-envelope protocol that reads like it belongs in a stage magician’s handbook. The target was not a bridge, a submarine pen, or a hostage house. The envelope, once opened, reportedly named something far stranger: “The planet Mars,” with a “time of interest approximately 1 million years B.C.” and coordinates written in degrees north and west. (Internet Archive)
Whether you read that as an intelligence artifact, a psychological Rorschach, or an accidental confession that U.S. agencies kept one foot in the metaphysical, the document has an undeniable quality: it is not a rumor. It is a paper trail.
Project STAR GATE, often shorthand-labeled “the CIA remote viewing program,” is best understood the same way. Not as a single mythic unit of “psychic spies,” but as a long-running, multi-agency experiment that left behind budgets, task counts, scoring rubrics, evaluation memos, and an internal debate that never truly ended. (irp.fas.org)
This is an investigative walk through what the record actually shows, where it fractures, and why STAR GATE still shadows modern UAP discourse.

By the numbers
If STAR GATE is approached like a case file rather than a campfire story, several metrics anchor the discussion:
- Total spend: “Some $20 million” over “more than two decades,” with “$11 million budgeted from the mid-1980’s to the early 1990s.” (irp.fas.org)
- Human footprint: “Over forty personnel” over time, including “about 23 remote viewers,” peaking at “as many as seven full-time viewers” in the mid-1980s. (irp.fas.org)
- Operational demand (documented in evaluation material): from 1986 to the first quarter of FY1995, the program received “more than 200 tasks” from operational military organizations. (alice.id.tue.nl)
- Scored operational sample: by May 1, 1995, three remaining viewers produced work on 40 tasks from five operational organizations, yielding 99 “accuracy” scores and 100 “value” scores from the taskers. (alice.id.tue.nl)
- Score distribution (from that scored sample):
  - Accuracy (6-point scale where “1” is most accurate): 13 of 99 scores were “1” (about 13%), and 55 clustered in the “2–3” range (about 56%). (alice.id.tue.nl)
  - Value (5-point scale where “1” is highest): there were zero “1” scores; 11 were “2” (11%); 80 clustered in “3–4” (80%). (alice.id.tue.nl)
- Bottom-line conclusion from the operational tasking evaluation inside the 1995 review package: the “utility of RV for operational intelligence collection cannot be substantiated.” (alice.id.tue.nl)
Those numbers do not “prove” remote viewing. They do something more important for investigators: they constrain the argument. STAR GATE can’t honestly be framed as either a tiny one-off curiosity or an unlimited, blank-check psychic program. The record points to a modest but persistent effort, repeatedly rebranded, repeatedly evaluated, and repeatedly controversial. (Internet Archive)
A program with many names, and a reason it kept changing
“STAR GATE” was not the beginning. It was an umbrella label applied after years of precursor efforts and bureaucratic reshuffling.
A key public compilation of the program lineage describes an initial CIA-funded thread beginning with SCANATE (“scan by coordinate”) around 1970, followed by remote viewing research at Stanford Research Institute (SRI) beginning in 1972. (irp.fas.org)
From there, the trail moves into Army intelligence space: GONDOLA WISH (1977), GRILL FLAME (formalized mid-1978 at Fort Meade), then CENTER LANE (1983). In 1985, when Army funding ended, the unit was redesignated SUN STREAK and transferred to DIA’s Scientific and Technical Intelligence Directorate. In 1991, under DIA auspices, the work transitioned to Science Applications International Corporation (SAIC) and was renamed STAR GATE. (irp.fas.org)
Why does this matter?
Because name changes are usually a signal. In intelligence culture, renaming is often what you do when you want to keep an activity alive while changing who owns the risk. STAR GATE’s history reads like a relay race where the baton is the same uncomfortable question:
If there is any reliable “anomalous cognition,” can it be operationalized without destroying the credibility of the institutions touching it?
Even CIA’s own internal public-affairs talking points from 1995 leaned into this framing. The memo prepared ahead of the American Institutes for Research (AIR) panel review emphasized that CIA was “not currently developing or using parapsychology,” while also acknowledging that earlier CIA work funded SRI “during about 1972 to about 1977.” (Internet Archive)
In other words: yes, it happened. No, we are not admitting it means what you think it means.
The origin story: Soviet psychotronics and Cold War fear economics
The most consistent justification for early U.S. interest is the Soviet angle.
One public summary states that between 1969 and 1971, U.S. intelligence sources concluded the USSR was engaged in “psychotronic” research, with suggested spending of roughly “60 million rubles per year” by 1970 and “over 300 million by 1975.” (irp.fas.org)
Investigatively, you can treat those figures as either accurate estimates or institutional folklore. Either way, they functioned as budget lubricants: the claim that an adversary was investing heavily in mind-based capabilities made it politically easier to fund a domestic test, even if leadership considered it speculative. (irp.fas.org)
And “speculative” is not a throwaway word here. CIA’s own prepared language for media inquiry describes the early program as “considered to be speculative.” (Internet Archive)
The SRI phase: when physicists, protocols, and “viewers” collided
The SRI chapter is the most culturally famous: Russell Targ and Hal Puthoff in Menlo Park, running early experiments and working with individuals presented as “gifted,” including Ingo Swann. (irp.fas.org)
This era is also where STAR GATE picked up some of its enduring baggage. A public compilation notes that some early participants had ties to Scientology, a detail that later critics used to frame the work as less “classified science” and more “government-funded occultism.” (irp.fas.org)
The key point is not the social controversy; it is the protocol ambition: SRI-era work helped seed the idea that remote viewing could be structured into repeatable stages and taught, turning “psychic talent” into something closer to a method. The same compilation describes Swann and Puthoff developing instructions intended to allow broader trainability, a conceptual bridge toward Controlled Remote Viewing (CRV). (irp.fas.org)
That “teachability” claim is one of STAR GATE’s central fault lines. If remote viewing is purely an individual anomaly, you recruit rare talents and protect them. If it is trainable, you can scale it, operationalize it, and potentially integrate it into routine collection.
The program spent decades living in that tension.
The operational phase: what tasking actually looked like
Operational remote viewing is often narrated as a highlight reel: dramatic “hits,” hidden submarines, missing hostages. The declassified evaluation material paints something more bureaucratic and more constrained.
In the operational tasking evaluation embedded in the 1995 AIR package, the program is described as receiving more than 200 tasks from 1986 through early FY1995. The tasks were designed to identify “targets” with as little specificity as possible to avoid “telegraphing” the response, meaning the remote viewers were often asked to “access and describe target” under deliberately vague framing. (alice.id.tue.nl)
Several structural problems immediately appear in that description:
- Vague tasking is protective, but it also makes outputs hard to score.
- When you can’t score reliably, you drift toward narrative validation: “this feels like it fits.”
- Narrative validation is exactly where intelligent analysts become most vulnerable to confirmation bias.
The evaluation text practically says this in institutional language, warning that ambiguity and subjectivity create “a need for additional efforts” of “questionable operational return” for analysts who must interpret the material. (alice.id.tue.nl)
The report also notes something investigators should not miss: until 1994, results were not evaluated by tasking organizations “by any numerical method” that would identify accuracy and value, aside from occasional narrative comments. Scoring arrived late, near the end, when the program was already on the edge of termination. (alice.id.tue.nl)
That is not a small detail. It means the program ran for years without a consistent performance dashboard, which makes retrospective claims, pro or con, much easier to argue and much harder to settle.
The scorecard: how taskers rated what they got
By May 1, 1995, the Star Gate program office had implemented a numerical evaluation method and collected scoring on 40 tasks. This produced 99 accuracy scores (6-point scale, with “1” most accurate) and 100 value scores (5-point scale, with “1” highest value). (alice.id.tue.nl)
The distribution is revealing:
- Accuracy scores clustered around “2” and “3” (55 of 99), with 13 “1” scores. (alice.id.tue.nl)
- Value scores clustered around “3” and “4” (80 of 100), with no “1” scores at all, and 11 “2” scores. (alice.id.tue.nl)
So even when taskers judged content as relatively accurate, they did not rate it as highly valuable.
If you want the STAR GATE debate in a single sentence, it is that gap: perceived accuracy did not translate into operational leverage.
And the evaluation team’s final wording is blunt: they conclude that the utility of remote viewing “cannot be substantiated,” and they say the intelligence value “simply cannot be discerned.” (alice.id.tue.nl)
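The cited percentages follow directly from the score counts. A minimal sketch, using the counts quoted above; the “4–6” and “5” remainder buckets are inferred from the totals (99 and 100), not stated in the report:

```python
# Sketch: reproduce the percentage figures quoted from the 1995
# operational-tasking evaluation. The "1" and "2-3"/"3-4" counts come
# from the text above; the remainder buckets are our inference from the
# stated totals, not the report's own table.

accuracy_counts = {"1": 13, "2-3": 55, "4-6": 31}    # 99 accuracy scores total
value_counts = {"1": 0, "2": 11, "3-4": 80, "5": 9}  # 100 value scores total

def share(counts: dict, bucket: str) -> float:
    """Return a bucket's share of all scores, as a percentage."""
    total = sum(counts.values())
    return round(100 * counts[bucket] / total, 1)

print(share(accuracy_counts, "1"))    # 13 of 99 -> ~13%
print(share(accuracy_counts, "2-3"))  # 55 of 99 -> ~56%
print(share(value_counts, "3-4"))     # 80 of 100 -> 80%
```

The arithmetic makes the accuracy/value gap concrete: a nontrivial share of outputs scored as accurate, while not a single output earned the top value rating.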
The lab debate inside the 1995 review: Utts versus Hyman
The AIR report is often cited because it contains, in one binding, a structured clash between two epistemologies.
Jessica Utts, a statistician, reviewed the experimental evidence and wrote: “It is clear to this author that anomalous cognition is possible and has been demonstrated,” explicitly framing the conclusion as based on “commonly accepted scientific criteria,” not belief. (alice.id.tue.nl)
Ray Hyman, a psychologist and well-known critic of parapsychology claims, took a different position. In the same package, he argued that earlier SRI remote viewing research “suffered from methodological inadequacies,” and he emphasized the core scientific norm that extraordinary claims require independent replication. (alice.id.tue.nl)
Importantly, Hyman’s skepticism in this review is not the lazy version. He acknowledges that the effect sizes reported in the SAIC experiments were “too large and consistent” to be dismissed as statistical flukes, while still disputing whether those effects justify concluding “anomalous cognition” is established as a causal reality. (alice.id.tue.nl)
If you want to see how the lab portion looks as numbers, Utts includes effect-size tables for SAIC-era experiments. For example, one remote viewing experiment listed an effect size of .124 (± .071) with p = 0.040. (alice.id.tue.nl)
That kind of result is statistically suggestive but operationally ambiguous: small effects, high noise, and a huge interpretive burden when you try to convert them into decisions with real-world consequences.
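The quoted numbers are internally consistent under one common reading. A sketch, assuming the “± .071” is one standard error and the p-value comes from a one-tailed normal test; both assumptions are ours, since the report’s exact method is not restated here:

```python
import math

# Sketch: sanity-check the quoted SAIC figures (effect size .124 +/- .071,
# p = 0.040) under the assumption that +/- .071 is one standard error and
# the test is one-tailed against a null effect of zero.

effect_size = 0.124
standard_error = 0.071

z = effect_size / standard_error              # ~1.75 standard errors above zero
p_one_tailed = 0.5 * math.erfc(z / math.sqrt(2))  # upper tail of the normal

print(round(z, 2))             # ~1.75
print(round(p_one_tailed, 3))  # ~0.040, matching the quoted p-value
```

That the quoted p-value falls out of this simple calculation illustrates the point in the surrounding text: the effect sits less than two standard errors from zero, which is exactly the regime where statistical suggestiveness and operational ambiguity coexist.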
Which is why the operational evaluation’s conclusion hits so hard: even if something non-chance is happening in the lab, the program could not reliably turn it into intelligence value. (alice.id.tue.nl)
The 1995 inflection: Congress pushes, CIA reviews, the public learns
In 1995, the program’s fate was no longer only an internal question.
A public program summary states that the FY1995 Defense Appropriations bill directed that the program be transferred to CIA and instructed CIA to conduct a retrospective review. (irp.fas.org)
CIA’s internal memo prepared for public-affairs handling aligns with that: it says CIA’s panel review of past CIA and DIA remote viewing programs was in response to a “Congressionally Directed Action,” and that a “blue ribbon panel” at AIR would assist. (Internet Archive)
The same public summary notes that AIR’s final report was released publicly on November 28, 1995, and that AIR recommended termination, with CIA concluding there was no case where ESP provided data used to guide intelligence operations. (irp.fas.org)
Even here, the paper trail has layered messaging: “statistically significant effect” in lab contexts, but “no actionable operational utility” in intelligence contexts. (irp.fas.org)
STAR GATE was not “debunked” in one clean kill shot. It was administratively ended because the cost-benefit case collapsed under review criteria that mattered to decision-makers.
Where UAP enters the file
STAR GATE is not, in the strict sense, a “UAP program.” It is a consciousness-and-collection program that sometimes brushed targets that look, from the outside, like UAP-adjacent content.
The record gives at least three entry points:
1) Explicit target theming in declassified session material
The “Mars Exploration” session is the clearest example of how far target theming could go, regardless of what you believe about the output. It is framed as an envelope-based target protocol with Mars and a deep-time date. (Internet Archive)
This is not “UAP evidence.” But it is evidence that the program’s task space could include extraordinary targets that overlap with the same conceptual territory as nonhuman intelligence narratives.
2) A declassified session file labeled as a UAP-style incident
Within the CIA’s STAR GATE collection listings, at least one “remote viewing session data” item is titled “UFO incident” (legacy terminology). (cia.gov)
Again, that does not authenticate the session’s conclusions. But it does establish that at least one session was framed around an anomalous aerial incident as its target theme.
3) Personnel and conceptual bleed into later UAP discourse
Even if STAR GATE was ended, the human network did not disappear. Public-facing media over the last few years has brought STAR GATE figures back into mainstream conversation, often explicitly connecting remote viewing, consciousness research, and UAP as a single continuum of “edge-of-government” inquiry.
The Shawn Ryan Show’s interviews with Skip Atwater and Joe McMoneagle are current examples of this re-surfacing, with episode titles and descriptions that explicitly place “psychic operations” alongside “alien encounter” framing and remote viewing of Mars. (Spotify)
A separate cultural artifact, the documentary Third Eye Spies, centers the SRI-era story and its intelligence sponsorship, further cementing STAR GATE’s association with UAP-adjacent communities, even when the original mission was broader intelligence collection. (YouTube)
In practical terms: STAR GATE’s most durable “UAP involvement” may be sociological, not operational. It helped normalize the idea that consciousness anomalies belong on the same table as advanced aerospace anomalies, especially for audiences who see UAP as a multi-domain phenomenon that includes human perception and cognition. (UAPedia – Unlocking New Realities)
The cast: scientists, viewers, evaluators, and the bureaucrats in the middle
A data-first investigation still needs names, because STAR GATE is as much about institutional behavior as it is about psi claims.
- Russell Targ and Harold Puthoff: central to the early SRI research that helped define the field of “remote viewing” inside government-funded contexts. (irp.fas.org)
- Ingo Swann: repeatedly described as an early “gifted” figure and linked to the development of training methodology narratives. (irp.fas.org)
- Edwin May: associated with the later contractor-era work and described in one summary as presiding over a large share of contractor budget and data collection during the SAIC phase. (irp.fas.org)
- Jessica Utts and Ray Hyman: paired evaluators in the AIR package, representing a structured confrontation between pro-psi statistical interpretation and skeptical scientific replication norms. (alice.id.tue.nl)
- AIR / CIA Office of Research and Development: the bureaucratic machinery that turned a classified controversy into a publishable retrospective evaluation under congressional pressure. (Internet Archive)
If you want the most revealing STAR GATE “character,” though, it is not any single person. It is the program manager who, in the operational evaluation description, gives the remote viewers only “rudimentary information” on a tasking sheet. (alice.id.tue.nl)
That design choice is where idealism meets tradecraft, and where failure modes multiply.
The controversies that actually matter
STAR GATE has accumulated endless noise: jokes, caricatures, credulous retellings. The controversies that matter are more specific, and more consequential.
Scoring and validation
The operational evaluation makes the core critique explicit: the ambiguous and subjective nature of the process creates extra analytic burden, and the intelligence value cannot be discerned. (alice.id.tue.nl)
This is not a philosophical dismissal. It is an operational one.
Late adoption of quantitative evaluation
Not implementing a consistent numerical scoring approach until 1994 is a serious institutional vulnerability. It makes it difficult to defend the program rigorously and equally difficult to condemn it fairly, because the data architecture needed to settle the argument was built at the end, not the beginning. (alice.id.tue.nl)
Lab effects versus field usefulness
Even within the AIR package, you can see how two honest evaluators can look at the same literature and diverge: Utts argues anomalous cognition is demonstrated; Hyman argues the methodological and replication standards are not met to infer causality. (alice.id.tue.nl)
Meanwhile, the operational tasking evaluation says the real-world utility cannot be substantiated. (alice.id.tue.nl)
If you treat STAR GATE as a “product,” it failed. If you treat it as “research,” the evidence remains contested.
Institutional stigma and compartmentalization
CIA’s public-affairs memo is effectively a stigma-management document: it anticipates media inquiry, emphasizes “CIA is not currently developing or using parapsychology,” and frames the review as compelled by Congress. (Internet Archive)
This matters because stigma shapes how programs are staffed, how results are reported, and how rigor is enforced. STAR GATE lived under a permanent credibility tax.
Management and morale degradation
A public summary claims that by the early 1990s the program was plagued by uneven management, poor morale, divisiveness, and few accurate results. (irp.fas.org)
Even if you discount that summary as secondary, it aligns with what one would expect from a long-running unit with ambiguous validation, high stigma, and shifting ownership.
The media afterlife: books, podcasts, and the new public STAR GATE
STAR GATE did not end in 1995. It changed jurisdiction again: from classified program to cultural engine.
Here are some useful entry points that are part of the current STAR GATE information ecosystem:
Books and documents
- The foundational government evaluation package: Mumford, Rose, & Goslin’s AIR report, which includes both expert reviews (Utts and Hyman) and operational tasking evaluation material. (alice.id.tue.nl)
- Paul H. Smith’s book Reading the Enemy’s Mind: Inside Star Gate, listed as a key resource in a public program summary. (irp.fas.org)
Podcasts and long-form interviews
- The Shawn Ryan Show #154: Skip Atwater, explicitly framed around remote viewing, Mars, and “psychic operations.” (Spotify)
- The Shawn Ryan Show #95: Joe McMoneagle, directly framed as “CIA’s Project Stargate” in platform listings. (Spotify)
- That UAP Podcast premium episode with Joe McMoneagle (February 2026 listing), showing how STAR GATE has been absorbed into modern UAP media circuits. (Apple Podcasts)
- Joe Rogan Experience #2314 with Hal Puthoff (platform listing), representing how the SRI-era network continues to surface in mainstream podcast culture. (YouTube)
Documentary
- Third Eye Spies, a modern documentary vehicle for the SRI and intelligence-sponsorship story. (YouTube)
Investigatively, this afterlife matters because it changes incentives. In the classified era, the incentive was utility. In the media era, the incentive is narrative velocity.
Those incentives produce different “truth selection.”
A hard question for UAP research: what would a modern STAR GATE test look like?
UAP science is slowly moving toward standardized data pipelines. STAR GATE was the opposite: a secret program trying to turn subjective impressions into intelligence.
If you want to connect STAR GATE to modern UAP inquiry responsibly, the correct move is not to treat remote viewing as a shortcut to disclosure. It is to define what would count as a legitimate result.
A credible modern protocol, if anyone insisted on re-testing the question, would likely need:
- Pre-registered targets and scoring rules
- Independent target randomization with secure escrow
- Time-stamped submissions with cryptographic hashing
- Blind judging by multiple independent teams
- Strict separation between taskers, monitors, and analysts
- Transparent error accounting, not just “hits”
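The “time-stamped submissions with cryptographic hashing” item above is the easiest control to make concrete. A minimal sketch of a commit-then-verify record; all field names and the helper functions are illustrative, not drawn from any actual protocol:

```python
import hashlib
import json
import time

# Sketch of tamper-evident submissions: a viewer's transcript is hashed
# together with its timestamp BEFORE the target is revealed, so the record
# cannot be quietly edited after the fact. Illustrative only.

def commit_submission(viewer_id: str, transcript: str) -> dict:
    """Return a record whose SHA-256 fixes the content and submission time."""
    record = {
        "viewer_id": viewer_id,
        "transcript": transcript,
        "timestamp": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["sha256"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_submission(record: dict) -> bool:
    """Recompute the hash over the original fields; any edit breaks it."""
    payload = json.dumps(
        {k: v for k, v in record.items() if k != "sha256"},
        sort_keys=True,
    ).encode("utf-8")
    return hashlib.sha256(payload).hexdigest() == record["sha256"]

rec = commit_submission("viewer-01", "a tall structure near water")
print(verify_submission(rec))   # True: record is intact
rec["transcript"] = "edited after target reveal"
print(verify_submission(rec))   # False: post-hoc edit is detectable
```

In a real deployment the hash would be escrowed with an independent party (or a public timestamping service) before target reveal; the point of the sketch is only that post-hoc editing becomes detectable, which removes one entire class of narrative-validation failure.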
The AIR package demonstrates why. Without careful controls, you end up with “accuracy” that does not translate into “value,” and a process that forces analysts to do interpretive labor that looks suspiciously like pattern-matching under uncertainty. (alice.id.tue.nl)
For UAP, that distinction is everything: the field is already drowning in ambiguous stimuli. Adding another ambiguity generator does not solve the problem unless it produces verifiable, constraint-satisfying, independently replicable outputs.
Bottom line
STAR GATE is not a fairy tale. It is a rare thing: a long-duration, government-touched exploration of anomalous cognition that left behind enough documentation to be audited.
The audit yields a paradox:
- The lab literature reviewed in 1995 includes statistically non-chance effects that serious evaluators argue about in good faith. (alice.id.tue.nl)
- The operational evaluation concludes the intelligence utility cannot be substantiated, with user value scores showing no top-tier usefulness in the scored sample. (alice.id.tue.nl)
- UAP enters the STAR GATE story less as a confirmed mission and more as an overlap zone: declassified session theming (Mars, “UFO incident” as legacy terminology), and a cultural network that later reappears in UAP-facing media. (Internet Archive)
If you want a single investigative takeaway, it is this:
STAR GATE’s greatest legacy may not be what it “saw,” but what it revealed about institutions. When faced with the possibility of a real anomaly, governments do not only deny or embrace. They commission panels, change code names, relocate ownership, build just enough structure to keep the program alive, and then terminate it when structure fails to justify utility.
That pattern should feel familiar to anyone tracking UAP today.
Claims Taxonomy
Verified
- CIA funded remote viewing research at SRI “during about 1972 to about 1977,” per CIA internal declassification Q&A talking points. (Internet Archive)
- The STAR GATE lineage included multiple code names across CIA, Army INSCOM, and DIA, and transitioned to SAIC in 1991 under DIA auspices in public summaries. (irp.fas.org)
- From 1986 through early FY1995, the program received “more than 200 tasks,” and in 1994–1995 a subset of 40 tasks was numerically evaluated, yielding documented accuracy and value score distributions. (alice.id.tue.nl)
- The operational tasking evaluation concluded the “utility of RV for operational intelligence collection cannot be substantiated.” (alice.id.tue.nl)
- Declassified material exists showing at least one session themed as “Mars Exploration” with a deep-time framing. (Internet Archive)
- CIA’s STAR GATE collection listings include a session file titled “UFO incident” (legacy terminology for UAP). (cia.gov)
Probable
- STAR GATE’s public and private afterlife substantially shaped how UAP-adjacent communities frame “consciousness” as part of the broader anomalous phenomena problem, as evidenced by modern high-reach interviews explicitly linking remote viewing, Mars, and “alien encounter” framing. (Spotify)
Disputed
- Remote viewing represents a genuine anomalous cognitive capability rather than methodological artifacts, cueing, or unknown confounds. (The AIR package contains strong disagreement between expert reviewers on inference and causality.) (alice.id.tue.nl)
- STAR GATE produced actionable intelligence value that materially influenced major operations. (Operational evaluation disputes substantiation; public summaries claim CIA concluded there was no case where ESP guided operations.) (alice.id.tue.nl)
- UAP-related session theming in the archive implies accurate perception of nonhuman technology or UAP actors. (Existence of themed sessions is documented; accuracy of extraordinary interpretations is not established by the titles alone.) (cia.gov)
Legend
- Not applicable as a single “folklore case,” but STAR GATE is surrounded by legend-making in pop culture. The analysis above relies on declassified/evaluative documents and current media listings rather than legend retellings. (alice.id.tue.nl)
Misidentification
- Not applicable in the classic “misidentified object” sense, but the operational evaluation warns that ambiguity and subjectivity can produce persuasive but low-value outputs that risk interpretive overreach. (alice.id.tue.nl)
Hoax
- No sufficient evidence in the cited evaluation material to label STAR GATE itself as a deliberate hoax; it is documented as a funded, evaluated set of programs. (Internet Archive)
Speculation Labels
Hypothesis
If UAP involves a consciousness component, then the historic persistence of remote viewing programs could be a weak institutional proxy for a real, low-signal anomaly that standard sensors did not capture well in earlier decades.
Witness Interpretation
Some STAR GATE-era and post-STAR GATE participants interpret “hits” (or themed sessions like Mars) as evidence of contact with nonhuman intelligence, but those interpretations exceed what the declassified task framing alone can prove. (Internet Archive)
Researcher Opinion
The most productive way to revisit STAR GATE in a UAP context would be strict modern pre-registration, independent replication, and transparent scoring designed to prevent narrative validation, because the 1995 operational evaluation shows how easily “accuracy” decouples from “value.” (alice.id.tue.nl)
References
Mumford, M. D., Rose, A. M., & Goslin, D. A. (1995). An evaluation of remote viewing: Research and applications. American Institutes for Research. (alice.id.tue.nl)
Central Intelligence Agency. (1995, June 12). CIA Remote Viewing Declassification Q & A (CIA-RDP96-00791R000100030062-7). (Internet Archive)
Federation of American Scientists. (2005, December 29). STAR GATE [Controlled Remote Viewing]. (irp.fas.org)
Central Intelligence Agency. (1984, May 22). Mars Exploration (declassified session document). (Internet Archive)
Shawn Ryan Show. (2025, January 2). #154 Skip Atwater – Bizarre Alien Encounter, Remote Viewing Mars and Psychic Operations (podcast episode listing/transcript). (Spotify)
Shawn Ryan Show. (2024, February 5). #95 Joe McMoneagle – CIA’s Project Stargate (podcast episode listing). (Spotify)
That UAP Podcast. (2026, February 14). (Premium) Joe McMoneagle: Project Stargate (podcast episode listing). (Apple Podcasts)
The Black Vault. (2018–2025). The STAR GATE Collection (FOIA and declassified documents). (The Black Vault)
Central Intelligence Agency. (n.d.). FOIA Electronic Reading Room (declassified records from 1966 onward).
SEO keywords
CIA Stargate Project, Project STAR GATE remote viewing, SCANATE GRILL FLAME SUN STREAK, SRI remote viewing program, Jessica Utts Ray Hyman AIR evaluation, Fort Meade psychic spies, declassified remote viewing documents Mars exploration, remote viewing UAP incident file, consciousness and UAP, Joe McMoneagle podcast, Skip Atwater Shawn Ryan Show, Third Eye Spies documentary