In case anyone is wondering, we are (sadly) very far from getting an image of this planet (or any extra-solar planet) that is more than 1 pixel across.
At 110 light-years distance you would need a telescope ~450 kilometers across to image this planet at 100x100 pixel resolution--about the size of a small icon. That is a physical limit based on the wavelength of light.
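A quick back-of-envelope check of that ~450 km figure (assuming a roughly Jupiter-sized planet, ~1.4e8 m across, and 500 nm light; both numbers are my assumptions, not from the comment):

```python
import math

LY_M = 9.461e15   # metres per light-year

def aperture_for_pixels(planet_diam_m, distance_m, n_pixels, wavelength_m=500e-9):
    """Aperture whose Rayleigh limit (1.22 * lambda / D) just resolves
    one pixel of the planet's disc."""
    pixel_angle = (planet_diam_m / n_pixels) / distance_m   # radians
    return 1.22 * wavelength_m / pixel_angle

# Jupiter-sized planet (~1.4e8 m across) at 110 light-years, 100x100 pixels
d = aperture_for_pixels(1.4e8, 110 * LY_M, 100)
print(f"{d / 1000:.0f} km")   # ~450 km, matching the figure above
```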
The best we could do is build a space-based optical interferometer with two nodes 450 kilometers apart, but synchronized to 1 wavelength. That's a really tough engineering challenge.
We can do better than that! Using the Sun as a gravitational lens[1], and a probe at a focal point of 542 AU, we could get 25km scale surface resolution on a planet 98 ly away. [2] This would be an immense and time-consuming endeavor, but does seem to be within humanity's current technological capabilities.
1. https://en.wikipedia.org/wiki/Solar_gravitational_lens
2. https://www.nasa.gov/general/direct-multipixel-imaging-and-s...
There are also alternative proposals to use refraction in Earth's atmosphere for focusing, in a way geometrically similar to a gravitational lens. It seems more feasible than using the Sun's gravitational lensing.
https://en.wikipedia.org/wiki/Terrestrial_atmospheric_lens
Does the size of the planet matter? How about using Saturn or Jupiter?
Yes, the larger the object you're using as a lens, the better the image. This follows from the lens maker's equation: larger objects like Earth, Jupiter, or the Sun give larger radii and therefore better resolution.
Wouldn't be worth the trouble to try.
Why, you ask?
How do you point it? Where do you point it?
You have a "telescope" with a field of view of one planet's worth of pixels. But the planet is in orbit, so it drifts out of the imaged field of view within minutes.
Meanwhile your sensor is travelling away from the "lens" so transverse velocity would be needed to track the orbit at a delta-v and direction that is unknowable. Unknowable, because you have to know where the planet is, within a radius, to put your "sensor" in the right place in the first place.
Imagine taking a straw, placing it in a tree, walking a few km away, and focusing a telescope on the straw in the hope of looking through it to see an airplane flying past. You have the same set of unknowables.
I won't argue that it would be worth the effort, but it would be interesting to set something like that going and just keep scanning. A few years worth of data might turn up interesting things even if it wasn't particularly useful for finding those things a second time.
A maintenance-free power source capable of lasting the 200 or so years it would take to make it to 542 AU does not seem within humanity's current technological capabilities.
Parker at its highest velocity could make it there in a century, but it doesn't have to slow down and stop. Or station keep.
When we have a power source that can do 5kW (I just doubled Hubble; 542 AU would probably require much more for communications) for 100 years, I'll agree that its design can be refined, its lifespan extended to 200 years, and that 542 AU is within our reach.
With distances that big, is it even necessary to slow down much? The depth of focus is probably a couple dozen AU? Even if it takes the probe a century to get there, if you can squeeze a decade or two of observation out of it without slowing down, there's no reason to bother and instead send a new upgraded telescope every decade or so.
As far as power requirements go, assuming a doubled power demand from Hubble might be a bit excessive. A telescope that far out would have to be nuclear powered, so thermal regulation is 'free'/passive and RCS load is reduced (don't have to constantly adjust to point away from the Earth), which I expect are the biggest power draws on Hubble.
If we assume a 150 year lifetime, with a 3kW draw by EOL and current RTG tech... RTGs have ~6% efficiency, so for 3kW electricity, you need 50kW in heat. RTG electricity output drops ~2% per year, so after 150 years, you have 5% of the initial electrical output, and you get ~0.57W/g of Pu-238. Meaning, you need ~600kg of it to power the telescope this way [https://www.mathscinotes.com/2012/01/nuclear-battery-math/].
That's not a politically feasible amount, but it's not technically impossible with current/near future tech whose development could be spurred on by serious interest in this kind of mission.
'Proper' fission reactors can also do the job, you get higher efficiency and don't have to run the reactors for the entire 150 years besides accounting for decay (e.g. an RTG that needs to provide enough power to keep some clocks running, the electronics and batteries warm, and trigger whatever mechanism would start up the reactor). Probably less than 100kg of Pu-238 just by better reactor efficiency.
It is indeed spherical frictionless cow-ly possible if we spend a trillion dollars to increase ORNL's annual Pu production capacity so that it doesn't take 200 years to make 600kg of Pu-238.
When someone demonstrates a complex device (let's set aside power generation; how about a valve? Or a capacitor?) that can last a century in space, I'll agree that it is actually possible.
That's what "current level of technology" means. The lego bricks exist, now, today, preferably in stock ready for immediate shipment on Digikey, and can be snapped into place.
Wouldn't there be a problem putting 600kg (or even 100kg) of Pu-238 together, because of supercriticality? I couldn't think of a plausible design, but I know next to nothing about this area. Basically I've heard that if you put a lot of this stuff together it'll make a big explosion
Criticality isn't hard to avoid: just split it between e.g. 343 units arranged in a 7x7x7 cube with 10cm gaps each way. Or more; I picked that separation and mass division by guessing.
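Sanity arithmetic on that split, taking the ~600 kg figure from upthread and assuming a bare-sphere critical mass for Pu-238 on the order of 10 kg (my assumption; note a 7x7x7 grid is 343 packages):

```python
total_pu_kg = 600          # upthread RTG estimate
units = 7 ** 3             # 7x7x7 grid = 343 packages
per_unit_kg = total_pu_kg / units

print(f"{per_unit_kg:.2f} kg per unit")   # ~1.75 kg
# A bare-sphere critical mass for Pu-238 is on the order of 10 kg
# (my assumption), so each package is far below criticality even
# before counting the 10 cm air gaps.
assert per_unit_kg < 10
```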
I don't think modern semiconductor devices will last more than 100 years, even without all the radiation. Making something last more than a few decades is very hard.
A Project Orion-type spacecraft can achieve 1000 km/s and could cover 542 AU within 3 years. And this is absolutely feasible technically, just not politically.
You’re never going to break into popular science reporting with that sort of attitude. If you are going to do the scale of a small thing, you have to compare it to the size of a banana or the width of a hair if it’s very small. For larger things, “football pitches” are the standard, although “blue whales” and “double-decker buses” are also acceptable units in some circumstances.
So, for scale, Voyager 1 is about 2.5 x 10^11 regulation football pitches away although they vary in size so it could be anywhere between 2.08 x 10^11 and 2.8 x 10^11. Now, see how much more relatable that is for a common person?
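The range checks out if "regulation" means the FIFA-permitted 90-120 m pitch lengths (Voyager 1's ~166 AU distance is taken from elsewhere in the thread):

```python
AU_M = 1.496e11                  # metres per AU
voyager_m = 166 * AU_M           # ~166 AU, per the thread

for pitch_m in (90, 105, 120):   # FIFA minimum / typical / maximum lengths
    print(f"{pitch_m} m pitch: {voyager_m / pitch_m:.2e} pitches")
```

The 90 m and 120 m endpoints reproduce the 2.08e11 to 2.8e11 spread quoted above.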
By “delta-v” I mean propellant budget, not initial velocity. So you spend half your delta-v to accelerate out and the other half to decelerate.
But of course, the initial delta-v costs a lot of propellant because it has to push an almost full tank. By the time we have to decelerate the ship will be a lot lighter.
That’s why you needed a full Saturn 3rd stage to send Apollo to the moon, but just the service module to get back to Earth.
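The "full tank costs more to push" point drops straight out of the Tsiolkovsky rocket equation; here's a sketch with purely illustrative numbers (exhaust velocity and masses are made up, not Apollo figures):

```python
import math

def propellant_needed(dry_mass, delta_v, v_exhaust):
    """Tsiolkovsky: m0 = mf * exp(dv/ve); returns propellant mass."""
    return dry_mass * (math.exp(delta_v / v_exhaust) - 1)

ve = 4500.0      # m/s, roughly hydrolox-class exhaust velocity
dry = 10_000.0   # kg of payload + structure
dv = 5000.0      # m/s each way

# Braking burn at the destination: only the dry ship plus its own
# braking propellant has to be pushed.
brake_prop = propellant_needed(dry, dv, ve)
# Departure burn: you also have to accelerate the braking propellant
# you are carrying for later.
accel_prop = propellant_needed(dry + brake_prop, dv, ve)

print(f"braking propellant: {brake_prop:.0f} kg")
print(f"departure propellant: {accel_prop:.0f} kg")
assert accel_prop > brake_prop   # the full tank costs more to push
```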
I realize now that “a lot of delta-v” is an understatement. 500 AUs is ridiculously far. To get there in under a century you’d need fission-fragment reactors, well beyond our current tech.
> I realize now that “a lot of delta-v” is an understatement. 500 AUs is ridiculously far. To get there in under a century you’d need fission-fragment reactors, well beyond our current tech.
Voyager 1 is 166 AU away, it launched about 50 years ago. So wouldn't we just have to do about twice as well as that, or launch 2 of them in opposite directions? That sounds _very_ hard (Voyager is amazing), but it can't be beyond our current tech, right? We did fairly close to that 50 years ago.
> At 110 light-years distance you would need a telescope ~450 kilometers across to image this planet at 100x100 pixel resolution--about the size of a small icon.
Or use two (or more) telescopes that are 450km apart:
It's a lot easier to reason about this using angular resolution, because that's normally what the diffraction limit formula is in reference to. If you know the angular diameter of the system (α) and the wavelength (say λ=500 nm for visible), you can use α ≈ λ/d and solve for the aperture of the telescope (d).
That puts a basic limit on the smallest thing you can resolve with a given aperture. You can use the angular diameter of the planet and the resolution you're after. For Alpha Centauri A it's 8.5 milliarcseconds, so ~85 μas per pixel for a 100px image? And that's just for the star!
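Plugging those numbers in (one pixel of Alpha Cen A's 8.5 mas disc at an assumed 500 nm; the wavelength is my choice):

```python
import math

MAS_TO_RAD = math.pi / (180 * 3600 * 1000)   # milliarcseconds -> radians

star_diameter_mas = 8.5        # Alpha Centauri A's angular diameter
pixels = 100
pixel_angle = star_diameter_mas * MAS_TO_RAD / pixels   # ~85 microarcsec

wavelength = 500e-9            # m, visible light
aperture = wavelength / pixel_angle   # alpha ~ lambda / d, solved for d
print(f"{aperture / 1000:.1f} km")    # kilometre-scale baseline for the star
```

A planet is roughly an order of magnitude smaller in angular size, which pushes the required baseline another order of magnitude up.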
The Event Horizon Telescope can achieve around 20-25 μas in microwave; you need a planet-scale interferometer to do that. https://en.wikipedia.org/wiki/Event_Horizon_Telescope It's possible to do radio measurements in sync with good clocks and fast sampling/storage, much harder with visible.
I'm not super up to date on visible approaches, but there is LISA which will be a large scale interferometer in space. The technology for synchronising the satellites is similar to what you'd need for this in the optical.
How far off are we still for doing this with visual light?
Let's say you build single photon detectors and ultra precise time stamping. Would that get us near?
Today, maybe we don't have femtosecond time stamping and detectors yet. But that is something I can imagine being built! Timing reference distribution within fs over 100s of km? Up to now, nobody needed that I guess.
The biggest issue is the sheer separation required. EHT operates in mm wave light, visible is 4-6 orders of magnitude shorter wavelength. There are several smaller scale interferometers. They can already do quite impressive things because even a 50m baseline is better than any optical telescope that exists.
The way that timing works for EHT is each station has a GPS reference that's conditioned with a very good atomic clock - for example at SPT we use a hydrogen maser. The readout and timing system is separate from the normal telescope control system, we just make sure the dish is tracking the right spot before we need to start saving data (sampling around 64 Gbps).
I'm not sure what the timing requirements are for visible and how the clock is distributed, but syncing clocks extremely well over long distances shouldn't be insurmountable. LISA needs to solve this problem for gravitational waves and that's a million+ km baseline.
Some problems go away in space. You obviously need extremely accurate station keeping (have a look how LISA Pathfinder does it, very cool), but on Earth we also have to take continental drift into account.
Is there another limit in terms of just: how many photons from object X even hit a telescope aperture of size Y from distance Z in, say, a year? We can't see the thing if no photons from it even intersect our telescope, right? Or maybe that limit is way less restrictive than the other...
The number of photons themselves is not too restrictive (I think the Voyager probe still emits 6-ish photons per second directed at the receiving dish). And we can easily build sensors that detect nearly every photon (above 99% efficiency). The tricky part will be differentiating between “source photons” and “background photons” (for Voyager we know exactly what to look for; here we wouldn't have any baseline for distinguishing).
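For the exoplanet case, a very rough order-of-magnitude sketch of reflected visible light from a Jupiter analogue at 110 ly (the 0.5 albedo, the isotropic-scattering geometry, and the lossless-detector assumption are all mine):

```python
import math

L_SUN = 3.8e26       # W, solar luminosity
AU = 1.496e11        # m
LY = 9.461e15        # m
R_JUP = 7.0e7        # m, Jupiter radius
E_PHOTON = 4.0e-19   # J, energy of a ~500 nm photon

# sunlight falling on a Jupiter analogue at 5.2 AU, half of it reflected
flux_at_planet = L_SUN / (4 * math.pi * (5.2 * AU) ** 2)      # W/m^2
reflected = 0.5 * flux_at_planet * math.pi * R_JUP ** 2       # W total

# spread (crudely, isotropically) over a sphere 110 light-years in radius
flux_here = reflected / (4 * math.pi * (110 * LY) ** 2)       # W/m^2
rate = flux_here / E_PHOTON * math.pi * 5 ** 2                # 10 m aperture

print(f"{rate:.1f} photons/s into a 10 m telescope")
```

A few photons per second sounds workable until you remember the host star is delivering on the order of a billion times more into the same aperture.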
It's linear, so if it is 25 times closer then the telescope can be 25 times smaller. At 4.37 light-years we'd need an 18 kilometer telescope to image a Jupiter-sized planet at 100x100 pixel resolution.
If you only wanted 10x10 resolution you could get by with a 1.8 kilometer telescope.
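That scaling as a one-liner (reusing the ~450 km / 110 ly / 100 px baseline from upthread):

```python
def aperture_km(dist_ly, px=100, ref_km=450.0, ref_dist_ly=110.0, ref_px=100):
    """Required aperture scales linearly with distance and pixel count."""
    return ref_km * (dist_ly / ref_dist_ly) * (px / ref_px)

print(f"{aperture_km(4.37):.1f} km")          # ~18 km for Alpha Centauri
print(f"{aperture_km(4.37, px=10):.1f} km")   # ~1.8 km at 10x10 pixels
```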
It would be really cool to have an array of space-based telescopes spaced out evenly in the Earth's orbit around the sun, and use each as relay for the others that cannot directly communicate with Earth, because the path is blocked by the Sun.
Then you could do observations outside the solar system's orbital plane with a 2 AU synthetic aperture. And maybe even do double duty as a gravitational wave observatory.
(And yes, this is currently more science fiction than science, but it's at least plausible that we can build such a thing one day).
Even a single pixel in the IR range is pretty cool, but something inside me wants the RGB pixel color in visible light range.
Is that a case of un-redshifting this pixel, or needing the optical interferometer you mentioned with multiple single-frequency filters?
Or something new? like a LHC style accelerator, or space based rail gun, to fire off a continuous stream of tiny cube sats towards the target, and using the stream itself as a comms channel back.
Yeah I know, this planet is burning, and all that effort for a RGB wallpaper seems crazy, but 'space stuff' also brings knowledge and hope.
> At its most sensitive state, LIGO will be able to detect a change in distance between its mirrors 1/10,000th the width of a proton! This is equivalent to noticing a change in distance to the nearest star (some 4.2 light years away) of the width of a human hair.
So I think two telescopes at 450km distance synchronized to "merely" (haha) a visible light's wavelength should be doable, if we throw a fuckton of money on that.
If you drop the requirement that the image has to be taken with wavelengths our eyes are sensitive to, you could image it using radio telescopes. We already have this capability, the problem though with radio interferometry is that while you can get an effectively huge aperture, the contrast level will be very low, and I am guessing that after subtracting the signal from the star, the signal from the planet will not be above the noise level. Note that optical interferometers would have the same problem.
My (tenuous) understanding of interferometry is that you receive light from two points separated by a baseline and then combine that light in such a way that the wavelengths match up and reinforce at appropriate points.
>In case anyone is wondering, we are (sadly) very far from getting an image of this planet (or any extra-solar planet) that is more than 1 pixel across.
the image on the linked website is more than 1 pixel across: what are you saying, that it's false/fake?
The resolution limit of the image (the smallest separation at which two points can be distinguished) is larger than the angular size of the planet, so it appears as a point spread function; no detail can be resolved.
Synchronization is solvable, and why stop at two? You could have a three-dimensional array of them, spread over very large distances. We have the technology now to pull this off.
I thought modern telescopes use software to merge images across a period of time / from multiple telescopes to get a significantly higher resolution than that achieved through the physical limitation of light. At least that’s how all the spy telescopes work and how various ground based telescopes collaborate afaik.
That’s in addition to gravitational lensing effects.
Take this even further and it eliminates a whole bunch of possible explanations for the Fermi Paradox.
If, like me, you believe the future of any civilization (including ours) is a Dyson Swarm then you end up with hundreds of millions of orbitals around the Sun between, say, the orbits of Venus and Mars. It's not crowded either. The mean distance between orbitals is ~100,000km.
People often ask: why would anyone do this? Easy. Two reasons: land area (per unit mass) and energy. With 10 billion people, each person gets land about the size of Africa and an energy budget of roughly the total solar output hitting the Earth, a truly incomprehensibly large amount of energy.
So instead of a telescope 450km wide (via optical interferometry), you have orbitals that are up to ~400 million kilometers apart. The resolution with which you could view very distant worlds is unimaginably high.
Why does this eliminate Fermi Paradox proposed solutions? One idea is that advanced civilizations hide. There is no hiding from a K2 civilization.
Yet another reminder that space is huge and no matter how big we can imagine, due to the realities of physics, there is a good chance that we might never be able to reach the far stars and galaxies.
The depressing, if that's the right word, counterpoint to all the "oh my god it's full of stars" deep fields crammed with millions of galaxies per square arcsecond is that the expansion of the universe means that nearly all of them are permanently and irrevocably out of reach even with near-lightspeed travel: they'll literally wink out of observable reality before we could ever get to them, leaving only a few nearby galaxies in the sky. At best you can reach the handful of gravitationally-bound galaxies in the local group.
Not that the Milky Way is a small place, but even most sci-fi featuring FTL and all sorts of handwaves has to content itself with shenanigans confined to a single galaxy due to the mindblowing, and accelerating, gaps between galaxies.
It's a shame, but in a glass-half-full sense, the fact that this planet is our little boat in the ocean and all that we've got is also a quite helpful focusing reminder and scope constraint.
That the stars are beyond reach might be depressing; how aggressively we are gambling our little boat is, on the other hand, actively scary, and perhaps the dominant limit on humanity's effective reach.
There was an article I saw about how long it would take the fastest spacecraft built with "non-speculative" physics (phenomena that have actually been observed in labs or in nature, ignoring any manufacturing and budget infeasibility, as in no handwaving sci-fi) to reach the next star, and we're still talking about an entire lifetime.
In a way we're kind of still like an ancient village who can only travel by boats made of reeds
Unlikely. There are both economical and moral reasons to never build a self replicating robotic fleet of probes. I think a sufficiently advanced civilization will always prefer telescopes over probes for anything more distant than the nearest couple of solar systems.
Just to drive the point home, we are technically (but not yet economically) capable of creating small telescopes which use our sun as a gravitational lens, which would be able to take photographs of exoplanets. In the far future we could potentially build very large telescopes which can do the same and see very distant objects with a fine resolution. That would be a much better investment than sending out self-replicating robotic probes.
"There are both economical and moral reasons to never build a self replicating robotic fleet of probes."
Such as?
" I think a sufficiently advanced civilization will always prefer telescopes over probes for anything more distant then the nearest couple of solar systems."
What part of "immortal" don't you understand? Traveling at 1% of c doesn't feel slow if you just turn off or slow down your brain during the trip.
I would expect that the probe makers would want some benefit from the fleet of probes they sent, and the only benefit I can think of is information about far-away objects, which is of scientific value. The probes' makers will therefore have to keep contact with an ever-expanding fleet of probes and sift through an exponentially increasing amount of information for millions of years. This just does not seem practical when you can just build a telescope. Now, time may not pass that slowly from the perspective of the probe, but for the civilization on the homeworld this method is painfully slow. They could have built thousands or millions of telescopes during that time to gather the same information (albeit of lower quality). Which is why you would probably want to probe your nearest neighboring solar systems, but nothing farther.
As for the moral reasons not to send out a fleet of self-replicating probes: these are an extreme pollution hazard. An ever-expanding fleet of robots traveling across the galaxy over millions of years, growing in numbers exponentially, exploiting resources in foreign worlds, with nothing to stop them if something happens to their makers. Over millions of years these things would be everywhere, and in the best case be a huge nuisance, but at worst they would be a risk to the public safety of the worlds they travel to. With these risks I believe a sufficiently advanced civilization would just build telescopes for their exploration needs.
You don't understand. The "probes" WOULD BE the creators. Biological life is far too fragile to survive interstellar travel but AI running on much more durable hardware makes it downright easy.
And they wouldn't have to be inherently self-replicating.
When you can live millions of years your idea of what is "slow" changes pretty drastically.
> What an appropriate name for an astrophysicist. I wonder if she's distantly related to the namesake of the Lagrange point.
Scopus has 390 profiles of people named Lagrange. It is not a very popular family name but it is not uncommon either and some of them are bound to end up in academia, whether they are descendants of Joseph-Louis or not.
I've been bearish on the JWST in the past. I've thought it an investment in science that could have been made better by waiting a bit for cheaper heavy lift and advances in computational imaging.
However, this is the culmination of the construction of a cathedral to science. Every stone laid one atop another, from our first comprehension of the cosmos to our emergence from our long dream as the center of a deity-constructed universe, has resulted in a discipline that can not only conceive of other spheres we could stand on, but now see entire other systems of spheres.
The key word "discovery" has been removed from the headline from TFA: "The James Webb Space Telescope Reveals Its First Direct Image Discovery of an Exoplanet". I.e, this is the first time that direct imagery was used to _discover_ a planet we didn't know existed previously.
Submitted title was "James Webb Space Telescope reveals its first direct image of an exoplanet", which I'm sure was just a good-faith attempt to fit HN's 80 char title limit. I've achieved that by compressing to JWST now :)
> Although there is a slight possibility that the newly detected infrared source might be a background galaxy
I understand the difficulty in what they are doing, but the scale of the error here is amusing. “We think we took a picture of something, but it might have been billions of things much bigger but further away”
Orbital mechanics, orbital period, and minimum determinable arc of JWST.
Though another thought is that doppler might also reveal velocity, if a spectrum could be obtained. Since the system is nearly perpendicular to the Solar System (we're viewing it face-on rather than from the side), those shifts will be small.
The JWST is a marvel of engineering. It is also a machine designed around the restrictions of what the most powerful rockets of the 1990's were capable of. Just imagine how capable future telescopes will be now that we have multiple super-heavy launch vehicles with cavernous payload fairings in development.
My fantasy is that at some point we’ll have a sufficiently powerful telescope to cause a galactic “Van Leeuwenhoek moment” where, just like that discoverer of microbes, we will suddenly see the galaxy swarming with spacecraft.
No? I genuinely think most of the world will have moved on and will be caring about something else within a day, the world will be about as chaotic and tumultuous as it was shortly after the discovery of microbes.
Yes, and too bad a twin or two weren't developed simultaneously, as the additional cost would be minimal - and now we have SpaceX rockets to launch them.
It's hard to commit to building a JWST-type payload around a not-yet-proven launcher. You'd want to wait until "in development" becomes "proven" before planning some decadal mission around it.
Another cool thing is that this technique is biased towards planets far from their star, because a planet is easier to see the further it is from its bright star.
In contrast, current techniques are biased towards close-in planets. Both Doppler-shift and light-curve methods tend to detect close-in planets.
We’ll get a better idea of the distribution of planets with both techniques.
> To further support their observations, Lagrange and her colleagues ran computer models that visualized the potential planetary system. The simulations yielded images that aligned with the ones captured by the telescope. “This was really why we were confident that there was a planet,”
Don’t get me wrong, I love that we are doing this work and have no reason to doubt that this is indeed an exoplanet image, but I view this kind of modelling as a pretty weak form of support for a hypothesis. Models are built from assumptions, which are influenced by expectations. They are not data.
This is super exciting. It seems possible to one day receive higher resolution images of this type of find. Perhaps someone who is more familiar with this subject can opine.
The moment we have our first, direct-observation photo of an earth-like exoplanet will be a defining point in our history.
The Nancy Grace Roman Space Telescope is supposed to have an even better coronagraph as a technology demonstrator. They keep finding ways to improve on the technology.
If it's allowed to continue, which seems very shaky at the moment. NASA's wounds from DOGE will result in projects, even mostly completed ones, being trashed.
> In April 2025, the second Trump administration proposed to cut funding for Roman again as part of its FY2026 budget draft. This was part of wider proposed cuts to NASA's science budget, down to US$3.9 billion from its FY2025 budget of US$7.5 billion. On April 25, 2025, the White House Office of Management and Budget announced a plan to cancel dozens of space missions, including the Roman Space Telescope, as part of the cuts.
That will be done with a solar gravitational lens - there's a recent-ish NASA paper about it. Basically you send your probe to > 550 AU in the opposite direction of your target exoplanet, point it at the Sun and you will get a warped high-res photo of the planet around the Sun. You can then algorithmically decode it into a regular photo.
I think the transit time is likely decades and the build time is also a long time as well. But in maybe 40-100 years we could have plentiful HD images of 'nearby' exoplanets. If I'm still around when it happens I will be beyond hyped.
this is one of those where a missed alignment is going to be a huge bummer. 550 AU times an arcsecond of error is a long way off from looking at what you wanted, and you wouldn't know until you were at minimum distance, which is going to take generations to achieve. Voyager 1 is only ~166 AU out and that took >40 years. So if you try to nudge your course, how many more generations would it be before it was aligned correctly?
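For a sense of how unforgiving the alignment is, here's the transverse miss at 550 AU per arcsecond of angular error (just the AU conversion and the small-angle approximation):

```python
import math

AU = 1.496e11                      # metres per AU
ARCSEC = math.pi / (180 * 3600)    # one arcsecond in radians

# transverse offset at the 550 AU focal region per arcsecond of error
miss_m = 550 * AU * ARCSEC
print(f"{miss_m / 1000:.0f} km off target per arcsecond")  # ~400,000 km
```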
I really liked the image a lot, so I emailed the author of the paper to see if she had a version without the clipart. She didn't, but said it was fine to remove it, so: https://s.h4x.club/YEuYLW8z (doesn't render tiffs I guess, so hit download)
It’s been truly fascinating to see f_p from the Drake equation go from a guess of maybe 0.5 as an upper bound to an increasingly confident 1 in my lifetime.
So presumably they'll be able to take another photograph in a year or two and the planet will have visibly moved? (Jupiter's orbital period around the Sun is about 12 years, but this planet is about 10 times further from the star and has an estimated orbital period of 550 years.)
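The ~550-year figure is consistent with Kepler's third law if the host star is around half a solar mass (the 0.46 M_sun value below is my assumption about the star, not a number from the article):

```python
def orbital_period_years(a_au, star_mass_msun):
    """Kepler's third law in solar units: P^2 = a^3 / M."""
    return (a_au ** 3 / star_mass_msun) ** 0.5

# ~52 AU separation, assumed ~0.46 solar-mass host star
print(f"{orbital_period_years(52, 0.46):.0f} years")   # ~550 years
```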
Not sure if you're joking, but in case you're not - the star at the center is usually so bright that its light drowns out the light of anything nearby. In such cases, the star is covered so that the dimmer objects nearby are visible.
How is it that we can spot a planet 110 light years away, but whether there’s another planet in the solar system past Pluto is a matter of legitimate scientific debate?
Because exoplanets by definition are going to be found adjacent to stars, which limits the area you need to search. Planets are fairly common, so you don't need to look at that many stars before you find evidence of an exoplanet, provided you have a good-enough telescope.
A hypothetical planet beyond Pluto could be anywhere in a huge part of the sky: presumably the orbit of such a planet could be inclined about as much as Pluto's. The 17-degree inclination of Pluto's orbit means it could be in a 34-degree-wide strip of the sky, which, if I'm doing my math right, is about 29% of the full sky. If we allow for up to a 30-degree inclination, that's half the sky.
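The 29% and 50% figures check out: a band within plus-or-minus i degrees of a great circle covers a fraction sin(i) of the sphere:

```python
import math

def sky_fraction_within(inclination_deg):
    """Fraction of the celestial sphere lying within +/- i degrees of a
    great circle such as the ecliptic; the integral works out to sin(i)."""
    return math.sin(math.radians(inclination_deg))

print(f"{sky_fraction_within(17):.0%}")   # ~29% of the sky
print(f"{sky_fraction_within(30):.0%}")   # half the sky
```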
There's also the matter of object size and brightness. The proposed Planet Nine[1] was supposed to be a few hundred AU away, and around the mass of 4 or 5 Earths. The object discovered in this paper is around 100 M🜨, at around 52 AU from its star. Closer and larger. (Of course, there's a sweet spot for exoplanet discovery, where you want the planet to be close enough to be bright, but far enough away to be outside the glare of the star.)
The paradox is explained by different detection methods: exoplanets like this one glow in infrared and are directly visible against the black of space, while Planet Nine would be extremely dim, non-glowing, and lost in the cluttered background of our galaxy's disk.
And imagine that the only reason, the ONLY reason, they haven’t completely blown us away, is because our planet happens to be one of the very rare planets where the ratio of the size of our moon and earth is in such a way that you can witness a total solar eclipse as a black hole in the sky once a year, and they would like to witness this event someday.
What if FTL is not possible? In that case the attack will take a long time to reach us, and in the meantime we will be much more advanced technologically and could potentially defend ourselves.
In sci-fi we see warp drives, worm hole travel, phasers, photon torpedos and energy shields around ships. But what if none of that is possible? In that case, we might even have the technology to defend ourselves today if we manage to detect the attack in time.
It's a huge risk for a civilization to attack us. Even if they have capabilities that are beyond our technology, there might still be limitations based on the laws of physics. And if they attack us, they risk a response.
In case anyone is wondering, we are (sadly) very far from getting an image of this planet (or any extra-solar planet) that is more than 1 pixel across.
At 110 light-years distance you would need a telescope ~450 kilometers across to image this planet at 100x100 pixel resolution--about the size of a small icon. That is a physical limit based on the wavelength of light.
The best we could do is build a space-based optical interferometer with two nodes 450 kilometers apart, but synchronized to 1 wavelength. That's a really tough engineering challenge.
We can do better than that! Using the Sun as a gravitation lens[1], and a probe at a focal point of 542 AU, we could get 25km scale surface resolution on a planet 98 ly away. [2] This would be an immense and time-consuming endeavor, but does seem to be within humanity's current technological capabilities.
1. https://en.wikipedia.org/wiki/Solar_gravitational_lens
2. https://www.nasa.gov/general/direct-multipixel-imaging-and-s...
There are also alternative proposals to use Earth's atmosphere refraction for focusing, in a geometrically similar fashion as gravitational lens. It seems more feasible than using Sun's gravitational lensing.
https://en.wikipedia.org/wiki/Terrestrial_atmospheric_lens
Does size of the planet matter? How about using Saturn or Jupiter?
Yes, the larger the object you're using as a lens, the better the image. This is due to the 'Lens Makers' Equation'. Larger objects like Earth, Jupiter, or the Sun would make for larger radii and therefore better resolution.
Wouldn't be worth the trouble to try.
Why, you ask?
How do you point it? Where do you point it?
You have a "telescope" with a field of view of one planet's worth of pixels. But the planet is in orbit, so it drifts out of the imaged field of view within minutes.
Meanwhile your sensor is travelling away from the "lens", so transverse velocity would be needed to track the orbit at a delta-v and direction that is unknowable. Unknowable, because you have to know where the planet is, within a radius, to put your "sensor" in the right place in the first place.
Imagine taking a straw, placing it in a tree, walking away a few km, focusing a telescope on the straw, and hoping to look through the straw to see an airplane flying past. You have the same set of unknowables.
I won't argue that it would be worth the effort, but it would be interesting to set something like that going and just keep scanning. A few years worth of data might turn up interesting things even if it wasn't particularly useful for finding those things a second time.
A maintenance-free power source capable of lasting the 200 or so years it would take to make it to 542 AU does not seem within humanity's current technological capabilities.
Parker at its highest velocity could make it there in a century, but it doesn't have to slow down and stop. Or station keep.
When we have a power source that can do 5kW (I just doubled Hubble; 542 AU would probably require much more for communications) for 100 years, I'll agree that its design can be refined, its lifespan extended to 200 years, and that 542 AU is within our reach.
With distances that big, is it even necessary to slow down much? The depth of focus is probably a couple dozen AU? Even if it takes the probe a century to get there, if you can squeeze a decade or two of observation out of it without slowing down, there's no reason to bother and instead send a new upgraded telescope every decade or so.
As far as power requirements go, assuming a doubled power demand from Hubble might be a bit excessive. A telescope that far out would have to be nuclear powered, so thermal regulation is 'free'/passive and RCS load is reduced (don't have to constantly adjust to point away from the Earth), which I expect are the biggest power draws on Hubble.
If we assume a 150 year lifetime, with a 3kW draw by EOL and current RTG tech... RTGs have ~6% efficiency, so for 3kW electricity, you need 50kW in heat. RTG electricity output drops ~2% per year, so after 150 years, you have 5% of the initial electrical output, and you get ~0.57W/g of Pu-238. Meaning, you need ~600kg of it to power the telescope this way [https://www.mathscinotes.com/2012/01/nuclear-battery-math/].
That's not a politically feasible amount, but it's not technically impossible with current/near future tech whose development could be spurred on by serious interest in this kind of mission.
'Proper' fission reactors can also do the job, you get higher efficiency and don't have to run the reactors for the entire 150 years besides accounting for decay (e.g. an RTG that needs to provide enough power to keep some clocks running, the electronics and batteries warm, and trigger whatever mechanism would start up the reactor). Probably less than 100kg of Pu-238 just by better reactor efficiency.
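The RTG napkin math above can be combined literally in code. All inputs are the comment's assumed figures, none vetted:

```python
# The comment's inputs, combined literally. All figures are the comment's
# assumptions; none are vetted engineering numbers.
years = 150
eol_electric_w = 3000          # 3 kW needed at end of life
annual_decay = 0.02            # ~2%/yr drop in electrical output
rtg_efficiency = 0.06          # thermal -> electric conversion
specific_power_w_per_g = 0.57  # thermal watts per gram of Pu-238

remaining = (1 - annual_decay) ** years          # ~5% output left
bol_electric_w = eol_electric_w / remaining      # size for beginning of life
bol_thermal_w = bol_electric_w / rtg_efficiency
pu238_kg = bol_thermal_w / specific_power_w_per_g / 1000
print(f"{remaining:.1%} output left after {years} yr; ~{pu238_kg:.0f} kg Pu-238")
```

Combined this straightforward way the mass comes out nearer 1.8 tonnes than 600 kg, so the sizing is very sensitive to how fuel decay and thermocouple degradation are mixed; the linked mathscinotes post has the original derivation.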
I agree with you.
It is indeed spherical frictionless cow-ly possible if we spend a trillion dollars to increase ORNL's annual Pu production capacity so that it doesn't take 200 years to make 600kg of Pu-238.
When someone demonstrates a complex device (let's set aside power generation; how about a valve? Or a capacitor?) that can last a century in space, I'll agree that it is actually possible.
That's what "current level of technology" means. The lego bricks exist, now, today, preferably in stock ready for immediate shipment on Digikey, and can be snapped into place.
Wouldn't there be a problem putting 600kg (or even 100kg) of Pu-238 together, because of supercriticality? I couldn't think of a plausible design, but I know next to nothing about this area. Basically I've heard that if you put a lot of this stuff together it'll make a big explosion.
Criticality isn't hard to avoid, just split it between e.g. 343 units arranged in a 7x7x7 cube with 10cm gaps each way. Or more; I picked that separation and mass division by guessing.
I don't think modern semiconductor devices will last more than 100 years, even without all the radiation. Making something last more than a few decades is very hard.
Considering that the longest continually operating computer is in Voyager 2 and has been running for nearly 50 years I would be surprised if this was actually a problem. https://www.guinnessworldrecords.com/world-records/635980-lo...
Does encasing electronics in lead help against high energy cosmic rays? With cheap kg to orbit one could assume the mass budget would be large.
Project Orion-type spacecraft can achieve 1000 km/s and could travel 542 AU within 3 years. And this is absolutely feasible technically, just not politically.
> A maintenance-free power source capable of lasting the 200 or so years it would take to make it to 542 AU
It wouldn't take nearly that long. The proposal is to use solar sails. There is a nice video about the details on YouTube: https://www.youtube.com/watch?v=NQFqDKRAROI
It would then have to brake.
Or just keep launching more so there’s always a usable one
For scale, Voyager 1 is about 167 AU away.
You’re never going to break into popular science reporting with that sort of attitude. If you are going to do the scale of a small thing, you have to compare it to the size of a banana or the width of a hair if it’s very small. For larger things, “football pitches” are the standard, although “blue whales” and “double-decker busses” are also acceptable units in some circumstances.
So, for scale, Voyager 1 is about 2.5 x 10^11 regulation football pitches away although they vary in size so it could be anywhere between 2.08 x 10^11 and 2.8 x 10^11. Now, see how much more relatable that is for a common person?
Smoots https://en.wikipedia.org/wiki/Smoot
We should definitely use TeraSmoots more as an astronomical unit.
I think Kipping of the Cool Worlds YouTube channel did a video arguing that we could just use Earth for the gravitational lensing, and that would be far cheaper
https://m.youtube.com/watch?v=jgOTZe07eHA
I was going to post the same exact thing and links.
Of all the possible space probes or missions we could do. I want this one more than any of them!
Do we have a recent cost estimate?
I'd guess less than 1 or 2 hyped AI startup valuations that eventually collapse to nothing.
Those are just financial transactions though, not actual loss of much engineering time etc.
ouch I thought I was cynical
Thank you for the chuckle.
And more importantly, a story points estimate (t-shirt sizing is obviously XL)
Let's get an epic ticket ready.
"We used to look up at the sky and wonder at our place in the stars. Now we just look down, and worry about our place in the dirt."
It's cynical to assume OP was gunning for "it's too expensive". They might just want to know the size of the challenge to get it done.
And it's ironic to scold others for missing a point while missing their point. All good though.
I missed it too. What was your point?
Agreed! This might be easier than an interferometer. You just need a lot of delta-v
How do you decelerate once you get there though?
By “delta-v” I mean propellant budget, not initial velocity. So you spend half your delta-v to accelerate out and the other half to decelerate.
But of course, the initial delta-v costs a lot of propellant because it has to push an almost full tank. By the time we have to decelerate the ship will be a lot lighter.
That’s why you needed a full Saturn 3rd stage to send Apollo to the moon, but just the service module to get back to Earth.
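The point about the outbound burn pushing a nearly full tank falls out of the Tsiolkovsky rocket equation. A sketch with illustrative numbers (the 20 km/s per leg and the roughly-hydrolox exhaust velocity are my assumptions, not a real mission design):

```python
import math

# Illustrative only: 20 km/s per leg and a ~hydrolox exhaust velocity are
# assumptions for the sketch, not a real mission design.
def mass_ratio(delta_v_m_s, exhaust_v_m_s):
    """Tsiolkovsky: initial/final mass for a burn totalling delta_v."""
    return math.exp(delta_v_m_s / exhaust_v_m_s)

ve = 4500.0        # m/s, roughly hydrolox
dv_leg = 20_000.0  # m/s to accelerate, and the same again to brake

leg = mass_ratio(dv_leg, ve)
# Burns compound multiplicatively: exp(2*dv/ve) = exp(dv/ve)^2. The outbound
# burn pushes a nearly full tank; the braking burn pushes a far lighter ship.
total = mass_ratio(2 * dv_leg, ve)
print(f"mass ratio per leg: ~{leg:.0f}x, both burns combined: ~{total:.0f}x")
```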
I realize now that “a lot of delta-v” is an understatement. 500 AU is ridiculously far. To get there in under a century you’d need fission-fragment reactors, well beyond our current tech.
> I realize now that “a lot of delta-v” is an understatement. 500 AU is ridiculously far. To get there in under a century you’d need fission-fragment reactors, well beyond our current tech.
Voyager 1 is 166 AU away, it launched about 50 years ago. So wouldn't we just have to do about twice as well as that, or launch 2 of them in opposite directions? That sounds _very_ hard (Voyager is amazing), but it can't be beyond our current tech, right? We did fairly close to that 50 years ago.
> At 110 light-years distance you would need a telescope ~450 kilometers across to image this planet at 100x100 pixel resolution--about the size of a small icon.
Or use two (or more) telescopes that are 450km apart:
* https://en.wikipedia.org/wiki/Aperture_synthesis
* https://www.nature.com/articles/ncomms7852
Do these scientist know they can just say “enhance”?
As someone who’s sat in meetings with nontechnical people and heard this exact request (“can’t you just enhance the image?”), I felt this.
How big would the telescope/mirror/lens need to be to get a picture of something in the Alpha Centauri system, 4.37 light years away?
Also, could the image be created by “scanning” a big area and then composing the image from a bunch of smaller ones?
It's a lot easier to reason about this using angular resolution, because that's normally what the diffraction limit formula is in reference to. If you know the angular diameter of the system (α) and the wavelength (say λ=500 nm for visible), you can use α ≈ λ/d and solve for the aperture of the telescope (d).
That puts a basic limit on the smallest thing you can resolve with a given aperture. You can use the angular diameter of the planet and the resolution you're after. Alpha Centauri A's disk is 8.5 milliarcseconds across, and an Earth-sized planet there would be roughly 100x smaller, so a 100px image of the planet needs O(1 μas) resolution. And that 8.5 mas is just the star!
The Event Horizon Telescope can achieve around 20-25 μas in microwave; you need a planet-scale interferometer to do that. https://en.wikipedia.org/wiki/Event_Horizon_Telescope It's possible to do radio measurements in sync with good clocks and fast sampling/storage, much harder with visible.
I'm not super up to date on visible approaches, but there is LISA which will be a large scale interferometer in space. The technology for synchronising the satellites is similar to what you'd need for this in the optical.
https://www.edmundoptics.com/knowledge-center/application-no...
https://arxiv.org/abs/astro-ph/0303634
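Sketching that estimate numerically (assumptions for the sketch: λ = 500 nm, the simple α ≈ λ/d limit, and a hypothetical Earth-sized planet roughly 100x smaller than the star's 8.5 mas disk):

```python
import math

# Assumptions: lam = 500 nm, simple diffraction limit alpha ≈ lam / d,
# and a hypothetical Earth-sized planet ~100x smaller than the star.
ARCSEC_TO_RAD = math.pi / (180 * 3600)
star_mas = 8.5                        # Alpha Cen A's angular diameter
planet_mas = star_mas / 100           # rough Earth-sized disk (assumption)
pixel_uas = planet_mas * 1000 / 100   # 100 px across the planet, in uas

alpha_rad = pixel_uas * 1e-6 * ARCSEC_TO_RAD
aperture_km = 500e-9 / alpha_rad / 1000
print(f"per-pixel angle: {pixel_uas:.2f} uas -> baseline ~{aperture_km:.0f} km")
```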
How far off are we still for doing this with visual light?
Let's say you build single photon detectors and ultra precise time stamping. Would that get us near? Today, maybe we don't have femtosecond time stamping and detectors yet. But that is something I can imagine being built! Timing reference distribution within fs over 100s of km? Up to now, nobody needed that I guess.
The biggest issue is the sheer separation required. EHT operates in mm wave light, visible is 4-6 orders of magnitude shorter wavelength. There are several smaller scale interferometers. They can already do quite impressive things because even a 50m baseline is better than any optical telescope that exists.
The way that timing works for EHT is each station has a GPS reference that's conditioned with a very good atomic clock - for example at SPT we use a hydrogen maser. The readout and timing system is separate from the normal telescope control system, we just make sure the dish is tracking the right spot before we need to start saving data (sampling around 64 Gbps).
I'm not sure what the timing requirements are for visible and how the clock is distributed, but syncing clocks extremely well over long distances shouldn't be insurmountable. LISA needs to solve this problem for gravitational waves and that's a million+ km baseline.
Some problems go away in space. You obviously need extremely accurate station keeping (have a look how LISA Pathfinder does it, very cool), but on Earth we also have to take continental drift into account.
Is there another limit in terms of just: how many photons from X object even hit an area of Y telescope aperture size from distance Z in, say, a year? We can't see the thing if no photons from it even intersect our telescope, right? Or maybe that limit is way less restrictive than the other...
The number of photons themselves is not too restrictive (I think the Voyager probe still emits 6ish photons per second directed at the receiving dish). And we can easily build sensors that detect every photon (far above 99% levels). The tricky part will be differentiating between “source photons” and “background photons” (for Voyager we know exactly what to look for; here we wouldn’t have any baseline for distinguishing)
It's linear, so if it is 25 times closer then the telescope can be 25 times smaller. At 4.37 light-years we'd need an 18 kilometer telescope to image a Jupiter-sized planet at 100x100 pixel resolution.
If you only wanted 10x10 resolution you could get by with a 1.8 kilometer telescope.
Wikipedia has more: https://en.wikipedia.org/wiki/Angular_resolution. The Rayleigh criterion is the equation to calculate this.
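The linear scaling works out like this (the 450 km baseline and both distances are taken from the thread):

```python
# Figures from the thread: 450 km aperture for 100x100 px at 110 ly.
baseline_km = 450.0
d_far_ly, d_near_ly = 110.0, 4.37   # light-years

scale = d_far_ly / d_near_ly            # ~25x closer
aperture_100px = baseline_km / scale    # ~18 km for 100x100 px
aperture_10px = aperture_100px / 10     # ~1.8 km for 10x10 px
print(f"~{aperture_100px:.0f} km for 100px, ~{aperture_10px:.1f} km for 10px")
```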
It would be really cool to have an array of space-based telescopes spaced out evenly in the Earth's orbit around the sun, and use each as relay for the others that cannot directly communicate with Earth, because the path is blocked by the Sun.
Then you could do observations outside the solar system's orbital plane with a 2 AU synthetic aperture. And maybe even do double duty as a gravitational wave observatory.
(And yes, this is currently more science fiction than science, but it's at least plausible that we can build such a thing one day).
Even a single pixel in the IR range is pretty cool, but something inside me wants the RGB pixel color in visible light range.
Is that a case of un-redshifting this pixel, or needing the optical interferometer you mentioned with multiple single-frequency filters?
Or something new, like an LHC-style accelerator or a space-based rail gun, to fire off a continuous stream of tiny cubesats towards the target, using the stream itself as a comms channel back.
Yeah I know, this planet is burning, and all that effort for a RGB wallpaper seems crazy, but 'space stuff' also brings knowledge and hope.
LIGO (the famous gravity wave detector) is made of two 4-kilometer arms. According to its website:
https://www.ligo.caltech.edu/page/facts
> At its most sensitive state, LIGO will be able to detect a change in distance between its mirrors 1/10,000th the width of a proton! This is equivalent to noticing a change in distance to the nearest star (some 4.2 light years away) of the width of a human hair.
So I think two telescopes at 450km distance synchronized to "merely" (haha) a visible light's wavelength should be doable, if we throw a fuckton of money at it.
If you drop the requirement that the image has to be taken with wavelengths our eyes are sensitive to, you could image it using radio telescopes. We already have this capability, the problem though with radio interferometry is that while you can get an effectively huge aperture, the contrast level will be very low, and I am guessing that after subtracting the signal from the star, the signal from the planet will not be above the noise level. Note that optical interferometers would have the same problem.
L2 is moving though right? Or does it need to be simultaneously receiving at the 2 points?
Sadly, it has to be simultaneous.
My (tenuous) understanding of interferometry is that you receive light from two points separated by a baseline and then combine that light in such a way that the wavelengths match up and reinforce at appropriate points.
Wikipedia has a decent summary: https://en.wikipedia.org/wiki/Aperture_synthesis
I just wanna say that this is an exemplary comment. This is the kind of thing I read HN comments for.
>In case anyone is wondering, we are (sadly) very far from getting an image of this planet (or any extra-solar planet) that is more than 1 pixel across.
The image on the linked website is more than 1 pixel across. What are you saying, that it's false/fake?
The angular resolution of the image (the ability to separate two points) is larger than the angular size of the planet, so it appears as a point spread function; no detail can be resolved.
Synchronization is solvable, and why stop at two? You could have a three-dimensional array of them, spread over very large distances. We have the technology now to pull this off.
I thought modern telescopes use software to merge images across a period of time / from multiple telescopes to get a significantly higher resolution than that achieved through the physical limitation of light. At least that’s how all the spy telescopes work and how various ground based telescopes collaborate afaik.
That’s in addition to gravitational lensing effects.
Take this even further and it eliminates a whole bunch of possible explanations for the Fermi Paradox.
If, like me, you believe the future of any civilization (including ours) is a Dyson Swarm then you end up with hundreds of millions of orbitals around the Sun between, say, the orbits of Venus and Mars. It's not crowded either. The mean distance between orbitals is ~100,000km.
People often ask why would anyone do this? Easy. Two reasons: land area (per unit mass) and energy. With 10 billion people, that'd be land about the size of Africa each with each person having an energy budget of about the solar output hitting the Earth, a truly incomprehensibly large amount of energy.
So instead of a telescope 450km wide (via optical interferometry), you have orbitals that are up to ~400 million kilometers apart. The resolution with which you could view very distant worlds is unimaginably high.
Why does this eliminate proposed solutions to the Fermi Paradox? One idea is that advanced civilizations hide. There is no hiding from a K2 civilization.
Yet another reminder that space is huge and no matter how big we can imagine, due to the realities of physics, there is a good chance that we might never be able to reach the far stars and galaxies.
The depressing, if that's the right word, counterpoint to all the "oh my god it's full of stars" deep fields crammed with millions of galaxies per square arcsecond is that the expansion of the universe means that nearly all of them are permanently and irrevocably out of reach even with near-lightspeed travel: they'll literally wink out of observable reality before we could ever get to them, leaving only a few nearby galaxies in the sky. At best you can reach the handful of gravitationally-bound galaxies in the local group.
Not that the Milky Way is a small place, but even most sci-fi featuring FTL and all sorts of handwaves has to content itself with shenanigans confined to a single galaxy due to the mindblowing, and accelerating, gaps between galaxies.
It's a shame, but in a glass-half-full sense the fact that this planet is our little boat in the ocean, and all that we've got, is also a quite helpful focusing reminder and scope constraint.
That the stars are beyond reach might be depressing; how aggressively we are gambling our little boat is, on the other hand, actively scary, and perhaps the dominant limit on humanity's effective reach.
There was an article I saw about how long it would take the fastest spacecraft built with "non-speculative" physics - phenomena that has actually been observed in labs or in nature, ignoring any manufacturing and budget infeasibility (as in no handwaving sci-fi) and we're still talking like an entire lifetime to the next star.
In a way we're kind of still like an ancient village who can only travel by boats made of reeds
Might be Charles Stross’s blog post The High Frontier: http://www.antipope.org/charlie/blog-static/2007/06/the-high...
Biological humans won't reach the stars but our immortal robotic offspring can.
Unlikely. There are both economical and moral reasons to never build a self-replicating robotic fleet of probes. I think a sufficiently advanced civilization will always prefer telescopes over probes for anything more distant than the nearest couple of solar systems.
Just to drive the point home: we are technically (but not yet economically) capable of creating small telescopes which use our Sun as a gravitational lens, which would be able to take photographs of exoplanets. In the far future we could potentially build very large telescopes which can do the same and see very distant objects at fine resolution. That would be a much better investment than sending out self-replicating robotic probes.
"There are both economical and moral reasons to never build a self replicating robotic fleet of probes."
Such as?
" I think a sufficiently advanced civilization will always prefer telescopes over probes for anything more distant then the nearest couple of solar systems."
What part of "immortal" don't you understand? traveling at 1% of c doesn't feel slow if you just turn off or slow down your brain during the trip.
I would expect that the probe makers would want some benefits from the fleet of probes they sent; the only benefit I can think of is information about far away objects, which is of scientific value. The probe’s makers will therefore have to keep contact with an ever expanding fleet of probes and sift through an exponentially increasing amount of information for millions of years. This just does not seem practical when you can just build a telescope. Now time may not pass that slowly from the perspective of the probe, but for the civilization on the homeworld, this method is painfully slow. They could have built thousands or millions of telescopes during that time to gather the same information (albeit of lower quality). Which is why you would probably want to probe your nearest neighboring solar systems, but nothing farther.
As for the moral reasons to not send out a fleet of self replicating probes. These are an extreme pollution hazard. An ever expanding fleet of robots traveling across the galaxy over millions of years, growing in numbers exponentially, exploiting resources in foreign worlds, with nothing to stop them if something happens to their makers. Over millions of years these things would be everywhere, and—in the best case—be a huge nuisance, but at worse they would be a risk to the public safety of the worlds they travel to. With these risks I believe a sufficiently advanced civilization would just build telescopes for their exploration needs.
You don't understand. The "probes" WOULD BE the creators. Biological life is far too fragile to survive interstellar travel but AI running on much more durable hardware makes it downright easy.
And they wouldn't have to be inherently self-replicating.
When you can live millions of years your idea of what is "slow" changes pretty drastically.
Didn’t China manage to shoot lasers to lunar orbit for comms?
Anne-Marie Lagrange, lead author of the study
What an appropriate name for an astrophysicist. I wonder if she's distantly related to the namesake of the Lagrange point. https://en.wikipedia.org/wiki/Lagrange_point
Incidentally, although I'd never heard of A-M Lagrange before now, she's had an incredible career: https://en.wikipedia.org/wiki/Anne-Marie_Lagrange
In fact, JWST orbits L2!
https://webbtelescope.org/contents/media/images/01F4STZH25YJ...
> What an appropriate name for an astrophysicist. I wonder if she's distantly related to the namesake of the Lagrange point.
Scopus has 390 profiles of people named Lagrange. It is not a very popular family name but it is not uncommon either and some of them are bound to end up in academia, whether they are descendants of Joseph-Louis or not.
Exactly my thought too, probably nominative determinism striking again
Another way to put that 111 light year distance into perspective, the Voyager space probes are yet to pass 1 light day from earth.
I've been bearish on the JWST in the past. I've thought it an investment in science that could have been made better by waiting a bit for cheaper heavy lift and advances in computational imaging.
However, this is the culmination of the construction of a cathedral to science. Every stone laid one atop another from our first comprehension of the cosmos to our emergence from our long dream as the center of a deity constructed universe has resulted in a discipline that can not only conceive of other spheres we can stand on, to entire other systems of spheres we can now see.
This is magnificent.
The HN title is subtly incorrect: this isn't the first direct image of an exoplanet from JWST. Here's an article from March showing several exoplanet images from JWST: https://science.nasa.gov/missions/webb/nasas-webb-images-you...
The key word "discovery" has been removed from the headline from TFA: "The James Webb Space Telescope Reveals Its First Direct Image Discovery of an Exoplanet". I.e, this is the first time that direct imagery was used to _discover_ a planet we didn't know existed previously.
Ok, we've put discovery in there now. Thanks!
Submitted title was "James Webb Space Telescope reveals its first direct image of an exoplanet", which I'm sure was just a good-faith attempt to fit HN's 80 char title limit. I've achieved that by compressing to JWST now :)
> Although there is a slight possibility that the newly detected infrared source might be a background galaxy
I understand the difficulty in what they are doing, but the scale of the error here is amusing. “We think we took a picture of something, but it might have been billions of things much bigger but further away”
With time, orbital motion should distinguish the two possibilities.
Though at a 50 AU orbit around a smallish star, that might take a while.
That actually makes one wonder if it will move enough within the lifetime of JWST to actually detect that orbital motion.
That should be calculable.
Orbital mechanics, orbital period, and minimum determinable arc of JWST.
Though another thought is that Doppler shift might also reveal velocity, if a spectrum could be obtained. Since the system is nearly perpendicular to the Solar System (we're viewing it face-on rather than from the side), those shifts will be small.
The JWST is a marvel of engineering. It is also a machine designed around the restrictions of what the most powerful rockets of the 1990's were capable of. Just imagine how capable future telescopes will be now that we have multiple super-heavy launch vehicles with cavernous payload fairings in development.
My fantasy is that at some point we’ll have a sufficiently powerful telescope to cause a galactic “Van Leeuwenhoek moment” where, just like that discoverer of microbes, we will suddenly see the galaxy swarming with spacecraft.
Assume for a moment that happens. Can you possibly imagine the chaos and turmoil that causes on Earth?
No? I genuinely think most of the world will have moved on and will be caring about something else within a day, the world will be about as chaotic and tumultuous as it was shortly after the discovery of microbes.
Microbes weren't discovered everywhere all at once though. I think if the entire planet found out (through modern media) people would go ballistic.
Yes, and too bad a twin or two weren't developed simultaneously, as the additional cost would be minimal - and now we have SpaceX rockets to launch them.
it's hard to commit to building JWST type of payload around a non-yet proven launcher. you'd want to wait until the "in development" becomes proven before planning to launch some decadal planned mission.
Ariane 5 seems pretty proven to me :D
yeah, nothing says proven like being retired
Another cool thing is that this technique is biased towards planets far from their star, because a planet is easier to see the farther it is from its bright star.
In contrast, current techniques are biased towards close-in planets. Both Doppler-shift and light-curve methods tend to detect close-in planets.
We’ll get a better idea of the distribution of planets with both techniques.
> To further support their observations, Lagrange and her colleagues ran computer models that visualized the potential planetary system. The simulations yielded images that aligned with the ones captured by the telescope. “This was really why we were confident that there was a planet,”
Don’t get me wrong, I love that we are doing this work and have no reason to doubt that this is indeed an exoplanet image, but I view this kind of modelling as a pretty weak form of support for a hypothesis. Models are built from assumptions, which are influenced by expectations. They are not data.
This is super exciting. It seems possible to one day receive higher resolution images of this type of find. Perhaps someone who is more familiar with this subject can opine.
The moment we have our first, direct-observation photo of an earth-like exoplanet will be a defining point in our history.
The Nancy Grace Roman Space Telescope is supposed to have an even better coronagraph as a technology demonstrator. They keep finding ways to improve on the technology.
If it's allowed to continue, which seems very shaky at the moment. NASA's wound from DOGE will result in projects - even mostly completed ones - being trashed.
China is catching up on optics and launch. The torch of civilisation seems unlikely to be lost if we fuck it up that badly.
I’m not sure why this is downvoted. It’s entirely accurate.
https://en.wikipedia.org/wiki/Nancy_Grace_Roman_Space_Telesc...
> In April 2025, the second Trump administration proposed to cut funding for Roman again as part of its FY2026 budget draft. This was part of wider proposed cuts to NASA's science budget, down to US$3.9 billion from its FY2025 budget of US$7.5 billion. On April 25, 2025, the White House Office of Management and Budget announced a plan to cancel dozens of space missions, including the Roman Space Telescope, as part of the cuts.
That will be done with a solar gravitational lens - there's a recent-ish NASA paper about it. Basically you send your probe to > 550 AU in the opposite direction of your target exoplanet, point it at the Sun and you will get a warped high-res photo of the planet around the Sun. You can then algorithmically decode it into a regular photo.
I think the transit time is likely decades and the build time is also a long time as well. But in maybe 40-100 years we could have plentiful HD images of 'nearby' exoplanets. If I'm still around when it happens I will be beyond hyped.
FYI: Direct Multipixel Imaging and Spectroscopy of an Exoplanet with a Solar Gravity Lens Mission. https://arxiv.org/abs/2002.11871
this is one of those where a missed alignment is going to be a huge bummer. 550 AU * arcseconds is a long way off looking at not what you wanted. you wouldn't know until you were at minimum distance, which is going to take generations to achieve. Voyager 1 is only ~166 AU and that was >40 years. so if you try to nudge your course, how many more generations would it be before it was aligned correctly?
An arcsecond at 550 AU is "only" 400,125 km. So, in theory, it's correctable in days.
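The small-angle arithmetic checks out:

```python
import math

# Small-angle check of the quoted "arcsecond at 550 AU" figure.
AU_KM = 1.496e8
ARCSEC_RAD = math.pi / (180 * 3600)
offset_km = 550 * AU_KM * ARCSEC_RAD   # ~399,000 km, near the quoted 400,125
print(f"{offset_km:,.0f} km")
```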
Paper: https://www.nature.com/articles/s41586-025-09150-4
I really liked the image a lot, so I emailed the author of the paper to see if she had a version without the clipart; she didn't, but said it was fine to remove it, so: https://s.h4x.club/YEuYLW8z (doesn't render tiffs I guess, so hit download)
I was aware that we have directly imaged exoplanets before, but I didn't know just how many we've seen now: https://en.m.wikipedia.org/wiki/List_of_directly_imaged_exop...
Not to take anything away from JWST - every one of these is an incredible achievement!
How would you feel if you were a planet mistaken for a galaxy?
The article starts using JSWT instead of JWST … is anyone here able to effect an edit?
It’s been truly fascinating to see f_p from the Drake equation go from a guess of maybe 0.5 as an upper bound to an increasingly confident 1 in my lifetime.
https://en.wikipedia.org/wiki/CE_Antliae https://www.nature.com/articles/s41586-025-09150-4
So presumably they'll be able to take another photograph in a year or two and the planet will have visibly moved? (Jupiter's orbital period around the Sun is about 12 years, but this planet is about 10 times further from the star and has an estimated orbital period of 550 years.)
Do NOT trust my napkin math, but I believe TWA 7 moves ~0.6 "pixels" (0.02 arcsec) per Earth-year.
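The napkin math can be redone explicitly (orbit and period figures from the thread; 111 ly distance from the article):

```python
import math

# Figures from the thread: ~52 AU orbit, ~550 yr period, ~111 ly distance.
orbit_au = 52.0
period_yr = 550.0
distance_pc = 111.0 / 3.26            # light-years -> parsecs

au_per_year = 2 * math.pi * orbit_au / period_yr   # ~0.59 AU/yr along the orbit
# At d parsecs, 1 AU subtends 1/d arcseconds (the definition of the parsec):
arcsec_per_year = au_per_year / distance_pc
print(f"~{arcsec_per_year:.3f} arcsec/yr")
```

That lands right around the quoted ~0.02 arcsec per Earth-year.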
The star thing made me think "Who's that planetoid?"
edit: but it's the orange thing not the star
Why is it censored?
They have to block out the light of the star so that it doesn't overwhelm the light from the planet.
Not sure if you're joking, but in case you're not - the star at the center is usually so bright that its light drowns out the light of anything nearby. In such cases, the star is covered so that the dimmer objects nearby are visible.
Did it come from JPL?
Bunch of liberals.. shakes fist
/s
Nice web site: "enable javascript to continue"
Any direct link to the pic?
How is it that we can spot a planet 110 light years away, but whether there’s another planet in the solar system past Pluto is a matter of legitimate scientific debate?
Because exoplanets by definition are going to be found adjacent to stars, which limits the area you need to search. Planets are fairly common, so you don't need to look at that many stars before you find evidence of an exoplanet, provided you have a good-enough telescope.
A hypothetical planet beyond Pluto could be in a huge part of the sky: presumably the orbit of such a planet could be inclined about as much as Pluto's. The 17-degree inclination of Pluto's orbit means it could be in a 34-degree-wide strip of the sky, which, if I'm doing my math right, is about 29% of the full sky. If we allow for up to a 30-degree inclination, then that's half the sky.
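For the curious, the math checks out: the band of the celestial sphere within ±i degrees of the ecliptic has area 4π·sin(i) out of a total 4π, so the fraction is just sin(i). A quick sketch:

```python
import math

def sky_fraction(max_inclination_deg: float) -> float:
    """Fraction of the celestial sphere within +/- i degrees of the ecliptic.

    The spherical band between latitudes -i and +i has area 4*pi*sin(i)
    on a unit sphere (total area 4*pi), so the fraction is sin(i).
    """
    return math.sin(math.radians(max_inclination_deg))

print(f"{sky_fraction(17):.0%}")  # Pluto-like inclination: ~29% of the sky
print(f"{sky_fraction(30):.0%}")  # 30 degrees: 50% of the sky
```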
There's also the matter of object size and brightness. The proposed Planet Nine[1] was supposed to be a few hundred AU away, and around the mass of 4 or 5 Earths. The object discovered in this paper is around 100 M🜨, at around 52 AU from its star. Closer and larger. (Of course, there's a sweet spot for exoplanet discovery, where you want the planet to be close enough to be bright, but far enough away to be outside the glare of the star.)
1. https://en.wikipedia.org/wiki/Planet_Nine
The paradox is explained by different detection methods: exoplanets like this one glow in infrared and are directly visible against the black of space, while Planet Nine would be extremely dim, non-glowing, and lost in the cluttered background of our galaxy's disk.
Because we are looking for much smaller planets.
How cool would it be to directly image artificial light on the "dark side" of a planet (like all the photos you see of lights on earth at night)?
I mean, even if there is life it's like 1 in a gazillion. But you could imagine some ML looking through all of its images to find planets, etc.
Or imagine another civilization looking at our lights with their telescope
And imagine that the only reason, the ONLY reason, they haven’t completely blown us away is that our planet happens to be one of the very rare planets where the apparent sizes of our moon and sun match so closely that you can witness a total solar eclipse as a black hole in the sky, and they would like to witness this event someday.
What if FTL is not possible? In that case the attack will take a long time to reach us, and in the meantime we will be much more advanced technologically and could potentially defend ourselves.
In sci-fi we see warp drives, worm hole travel, phasers, photon torpedos and energy shields around ships. But what if none of that is possible? In that case, we might even have the technology to defend ourselves today if we manage to detect the attack in time.
It's a huge risk for a civilization to attack us. Even if they have capabilities that are beyond our technology, there might still be limitations based on the laws of physics. And if they attack us, they risk a response.
That's no reason not to blow us away, eclipses still work if there are no annoying humans around to see them.