The price dropped because China is selling these things by the truckload to both sides in Ukraine. China is benefiting from economies of scale: Russia and Ukraine are each burning through on the order of 100,000 of these things per month, and most components are sourced from China.
I read recently that Ukraine was producing a lot of drones and ramping up production, as well as using fibre-optic-controlled drones to overcome electronic warfare/jamming.
Hundreds of small companies (around 10 employees each) in Ukraine are making and assembling drones right now (according to journalists who filmed them in Ukraine last month). I counted at least two dozen companies in the Netherlands, and probably a few hundred all over Europe. Apart from the frame that holds the four or six rotors together, all other parts are off the shelf, mostly ordered from Shenzhen. You can CNC mill a carbon fiber or aluminum frame from a single plate on an X-Y flatbed CNC.
I researched all this last night because I need $20K-$200K in investment right now to start my own startup making unjammable autonomous kamikaze drones with 20 kg payloads on top of unreeling 20 km optical-fiber spools. I think it is possible to make them lighter, without fibers, if you put in a few thousand microprocessors with wafer-scale integration for $500.
Of course you could just make cheap drones that indiscriminately kill every warm body in sight[4]. To disable a tank you need these fiber-tethered drones with bigger payloads[3][5][6]. You can see two fiber strands still in the air in a few frames of the video.
What you need is a Paul MacCready style development cycle for drones in the field. You need to be able to make 10 crashes per hour, fix the drone by swapping in off-the-shelf components, and try again. The programming should be live-coding like Squeak Smalltalk: no recompiling, just fix and run while you are using the software to edit itself[1], as Dan did in 1976[2] on a 5.88 MHz computer (see the sketch after the links). We still code like this today, but most programmers and managers ignore it in favor of terminals and simulated punch cards. Remember that it took humans 200,000 years to invent written language or agriculture; we just are not that smart.
[1] https://youtu.be/NqKyHEJe9_w?t=433
[2] https://www.youtube.com/watch?v=uknEhXyZgsg
[3] https://www.youtube.com/watch?v=m3uAFBzzbTc
[4] https://www.youtube.com/watch?v=TlO2gcs1YvM
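To make the live-coding point concrete, here is a minimal fix-and-run loop sketched in Python rather than Smalltalk. The `flight_logic` module and its `step()` function are hypothetical stand-ins for whatever control code you'd be editing between crashes:

```python
# Minimal fix-and-run loop: the control logic lives in a module that is
# reloaded every iteration, so edits to flight_logic.py take effect
# without restarting. A sketch of the idea, not a flight-ready design.
import importlib
import time

import flight_logic  # hypothetical module holding the control code


def main():
    while True:
        importlib.reload(flight_logic)      # pick up edits on the fly
        try:
            flight_logic.step()             # hypothetical: one control step
        except Exception as exc:
            # A crash here costs one cycle, not a recompile-redeploy round.
            print("step failed, edit the file and it reloads:", exc)
        time.sleep(0.1)


if __name__ == "__main__":
    main()
```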
If you unwind the spool hanging under the drone with a rotating shaft through the spool, you have no tension on the fiber (except for a few grams of its weight if you unspool too slowly), and the weight of the fiber already in the air even helps unspool it (although you want to prevent that by speeding up the unwinding).
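For a feel of the numbers, a back-of-the-envelope calculation (all values are my own assumptions, not from the comment above): the shaft has to spin fast enough that the payout rate matches the drone's ground speed, and it has to speed up as the spool empties and its effective radius shrinks.

```python
# Shaft speed needed so the fiber pays out at ground speed (near-zero
# tension). Numbers are illustrative assumptions.
import math

drone_speed = 20.0     # m/s, assumed cruise speed
spool_radius = 0.05    # m, assumed effective radius of the remaining fiber

omega = drone_speed / spool_radius      # rad/s of the shaft
rpm = omega * 60.0 / (2.0 * math.pi)
print(f"shaft speed: {rpm:.0f} RPM")    # ~3820 RPM at these numbers
```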
Again, not diminishing the work, just adjusting the expectations: this output is not from a curious guy in a shed. He is a trained electrical engineer working as a senior consultant/scientist in RF design for a Finnish engineering consultancy.
This guy clearly has a breadth and depth of knowledge from his professional career which he brings with him into his out-of-office projects.
What's impressive is that he did the RF part, the SAR data reduction part, and the drone control part. Those are different skill sets.
Wonder if anyone else has seen more resources on this? Going to try to digest the linked paper: https://topex.ucsd.edu/rs/sar_summary.pdf
And in the end it seems you can bypass a lot of the position problems with autofocusing! First time seeing this part too :))
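For anyone curious, here's a toy of the autofocus idea in Python: synthesize a dominant scatterer with an unknown per-pulse phase error, estimate the error from pulse-to-pulse phase differences (the core of phase gradient autofocus), and remove it. Everything here is made up for illustration, and real PGA first shifts the brightest scatterer to zero Doppler rather than assuming its frequency is known:

```python
# Toy phase-gradient autofocus: recover per-pulse phase errors from a
# dominant scatterer without knowing the platform positions.
import numpy as np

rng = np.random.default_rng(0)
n = 256
f0 = 0.1                                             # scatterer Doppler (cycles/pulse)
target = np.exp(1j * 2 * np.pi * f0 * np.arange(n))  # dominant scatterer
phase_err = np.cumsum(rng.normal(0.0, 0.1, n))       # slowly drifting error
data = target * np.exp(1j * phase_err)

# Pulse-to-pulse phase gradient; subtract the scatterer's own (here known)
# linear phase, then integrate to recover the error up to a constant.
grad = np.angle(data[1:] * np.conj(data[:-1])) - 2 * np.pi * f0
est_err = np.concatenate([[0.0], np.cumsum(grad)])
focused = data * np.exp(-1j * est_err)

# Residual phase is flat: a constant offset, which doesn't affect the image.
residual = np.angle(focused * np.conj(target))
print(np.allclose(residual, residual[0], atol=1e-9))  # True
```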
My thesis describes how the math of this all works, in chapters 3 and 4 of the PDF here: https://github.com/stevesimmons/phd-thesis-radar-imaging
Here:
SAR = Synthetic Aperture Radar, where the radar flies along a straight line and the apparent rotation of the ground produces a Doppler shift that can be used to get high cross-range resolution.
ISAR = Inverse Synthetic Aperture Radar, where the radar is still (e.g. on the ground) and the target (e.g. a plane) is flying, and their relative motion produces a rotation of the target, which equally produces a Doppler shift that can be used to get high cross-range resolution.
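The payoff of that apparent rotation fits in one formula: cross-range resolution is roughly lambda / (2 * delta_theta), where delta_theta is the total angle the line of sight rotates through. A quick sanity check with made-up numbers:

```python
# Cross-range resolution from the rotation/Doppler picture:
# delta_cr ≈ lambda / (2 * delta_theta). All numbers are assumptions.
wavelength = 0.03        # m (10 GHz X-band, assumed)
aperture_len = 100.0     # m of straight flight path (assumed)
target_range = 1000.0    # m (assumed)

delta_theta = aperture_len / target_range     # small-angle approximation
delta_cr = wavelength / (2.0 * delta_theta)
print(f"cross-range resolution ≈ {delta_cr:.2f} m")   # 0.15 m
```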
I find it easiest to understand in the context of a conventional phased array, but that might not help if you don't already know how those work.
With a phased array you're beamforming, sweeping a beam across an area of space. Your signal returns come from the main beam or the side lobes. You can passively beamform on Rx as well.
But with SAR you're not beamforming. You're illuminating everything - the whole ground below you. And you get a return from everywhere all at once. Two equidistant reflectors will return signals simultaneously. If your flight path is between these two points, and the distance is always equal, how can you differentiate them?
You're digitally beamforming on the Rx somehow, but I think there is more to it.
There are a couple conceptual ways to think about SAR. One is, in fact, as beamforming. Each position of the radar along the synthetic aperture is one element in an enormous array that's the length of the synthetic aperture itself: that's your receive array.
Regarding your question about scatterers that are equidistant along the entire synthetic aperture length: typically, SAR systems don't use isotropic antennas. And they're generally side-looking. So you would see the scatterer to one side of the radar, but not the equidistant scatterer on the other side.
If you had an isotropic antenna that saw to each side of the synthetic aperture, then the resulting image would be a coherent combination of both sides. Relevant search terms would be iso-range and iso-Doppler lines. Scatterers along the same iso-range and iso-Doppler lines over the length of the synthetic aperture are not distinguishable.
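A quick numerical check of that ambiguity (geometry is made up): two scatterers mirrored across the flight track have identical range histories, hence identical phase histories, so with an isotropic antenna nothing distinguishes them.

```python
# Mirror-image scatterers across the flight track are indistinguishable:
# their range (and therefore phase) histories are identical. Assumed geometry.
import numpy as np

xs = np.linspace(-50.0, 50.0, 101)                 # radar positions along track
track = np.stack([xs, np.zeros_like(xs)], axis=1)  # flight path on the x-axis
a = np.array([10.0,  300.0])                       # scatterer on one side
b = np.array([10.0, -300.0])                       # its mirror on the other
ra = np.linalg.norm(track - a, axis=1)
rb = np.linalg.norm(track - b, axis=1)
print(np.allclose(ra, rb))                         # True: same range history
```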
As to your question earlier in the chain, my preferred SAR book is Carrara et al. Spotlight Synthetic Aperture Radar: Signal Processing Algorithms. Given the title, it is of course geared toward spotlight (where you steer the beam to a particular point) rather than strip map or swath (where your beam is pointed at a fixed angle and dragged as you move along). It has decent coverage of the more computationally efficient Fourier-based image formation algorithms but does not really treat algorithms like the back projection that Henrik uses (I also think back projection is easier to grasp conceptually, particularly for those without a lot of background in Fourier transforms). But my book preference might just be because that's what I first learned with.
You're skipping a step -- where does that beam come from? For simplicity, let's think about a scene illuminated uniformly (i.e. from a single element) so that we don't get hung up on the transmit beam. I think we agree you could still sweep a receiving phased array beam across that scene. Let's further assume it's digital beamforming, so you're storing a copy of the signal incident _at every element of the array_. Not a 'beam' yet, just a bunch of individual signals.
>> you get a return from everywhere all at once
Yes! Think about each of those elements of the phased array -- they're also receiving signals from everywhere all at once.
It only becomes localized into a beam when you combine all the elements with specific phase weights. That process of combining element returns to form the beam is mathematically identical to what you do in SAR as well -- combine all your individual 'element' (individual snapshot in space) responses with some phase weights to get the return in one direction. Repeat the math in all directions to form one dimension of the image (second dimension is the radar time-of-flight bit, which is unrelated to the beamforming).
Maybe not you specifically, but I think people don't understand the 'synthetic aperture' part: that you can ignore the time between snapshots (because the transmitter and receiver are synchronized) and act like all the snapshots the platform took along the line of flight happened simultaneously. What you're left with is the element responses of a big phased array, and you can 'beamform' using those responses.
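To make that concrete, here's a minimal numerical sketch (all parameters invented) that treats each pulse position as an array element and 'beamforms' the recorded snapshots with phase weights matched to each image pixel. This is essentially the backprojection mentioned elsewhere in the thread:

```python
# SAR as a big phased array: simulate narrowband returns from one point
# scatterer at positions along a straight track, then combine the
# per-position snapshots with matched phase weights for each pixel
# (backprojection). Everything here is an illustrative assumption.
import numpy as np

c = 3e8
fc = 10e9                                   # assumed X-band carrier
lam = c / fc

xs = np.linspace(-50.0, 50.0, 201)          # 'elements': pulse positions (m)
radar = np.stack([xs, np.zeros_like(xs)], axis=1)

scat = np.array([10.0, 1000.0])             # one point scatterer (m)
r = np.linalg.norm(radar - scat, axis=1)
snapshots = np.exp(-1j * 4.0 * np.pi * r / lam)   # round-trip phase history

img_x = np.linspace(-20.0, 40.0, 121)
img_y = np.linspace(980.0, 1020.0, 81)
image = np.zeros((len(img_y), len(img_x)))
for iy, y in enumerate(img_y):
    for ix, x in enumerate(img_x):
        rr = np.linalg.norm(radar - np.array([x, y]), axis=1)
        weights = np.exp(1j * 4.0 * np.pi * rr / lam)  # 'beamforming' weights
        image[iy, ix] = np.abs(np.sum(snapshots * weights))

iy, ix = np.unravel_index(image.argmax(), image.shape)
print("peak at x =", img_x[ix], ", y =", img_y[iy])    # (10.0, 1000.0)
```

Cross-range (x) comes out sharp because the phase history varies strongly with x across the aperture; with only a single frequency the down-range (y) direction stays blurry, which is the time-of-flight part set aside above.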
Every month there’s a new article that improves my suburban missile defense system* and helps guarantee that my neighbors will never let their dogs poop on my lawn ever again.
Terrain mapping my neighborhood so that my rockets can navigate to the right offender has always been a challenge. Now I can just use drones!
* It’s a defense system with missiles, not a defense system against missiles. That would be silly.
Tactical and Strategic Missile Guidance, Seventh Edition https://www.amazon.com/Tactical-Strategic-Missile-Guidance-S...
Tactical missile warheads https://www.amazon.com/Tactical-Warheads-Progress-Astronauti...
Fundamentals of Astrodynamics https://www.amazon.com/Fundamentals-Astrodynamics-Second-Dov...
In any case, I've absolutely noticed how even a Raspberry Pi Zero, taken back 40 years, would be a supercomputer beyond anything that money could buy back then.
A GPU that we'd consider obsolescent today would be truly insane in 1985.
https://www.businessinsider.com/ukraines-drone-makers-produc...
In the former case, you have close to zero for both. In the latter, you're dragging 2km of line behind a small aircraft, risking the line getting snagged by something on the ground.
In most of the write-up, these drones were on pre-planned autonomous routes, and would thus not be affected. Unless you also had anti-radar installations :-)
Or hell, some kinda highly-illegal and mostly-unfeasible microwave gun or multi-watt CO2 laser that you could point at a drone and bring it down.
https://www.lesslethal.com/products/12-gauge/als12skymi-5-de...
No sense sacrificing any hardware. Zero resource attrition and imperceptible evidence of action is the name of the game.
Match that with a burst of sound outside the human hearing range and the enemy clears out of its own accord, far beyond your legal and overt spheres of action.
If a close neighbor’s unit stays clear but expresses its dismay regarding warning sounds too loudly, well, this works for that too.
We are talking about nuisance bears and moose, right?
Clearly the solution is laser ignition of napalm -- you don't want to accidentally catch the drone on fire, so the laser can be timed to fire once the stream is clear of it.
Now we’re talking! Unfortunately I couldn’t figure out how to deploy something as viscous as napalm through my sprinkler system. I’ve been reduced to dropping it from a helicopter, the old school way.
The parabolic distribution is not needed at all in this case; you would just adjust the phase at each unit based on the distribution you actually have.
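A sketch of what "adjust the phase at each unit" looks like (frequency and positions are my own assumptions): measure where each drone-carried element actually is and apply the conjugate phase for the desired look direction, the same correction a parabolic surface would have imposed geometrically.

```python
# Per-unit phase weights for an irregular (non-parabolic) element layout.
# Frequency and positions are illustrative assumptions.
import numpy as np

c = 3e8
f = 50e6          # Hz, assumed, in the low band where this is practical
lam = c / f       # 6 m wavelength

# Measured positions of drone-carried elements (m), deliberately irregular
pos = np.array([[0.0, 0.0], [11.0, 2.0], [23.5, -1.0], [35.0, 3.0], [48.0, 0.5]])

steer = np.array([0.0, 1.0])                         # desired look direction (unit vector)
weights = np.exp(-2j * np.pi / lam * (pos @ steer))  # conjugate phase per unit

# Applying these weights makes a plane wave arriving from 'steer' add
# coherently, regardless of where each element happens to hover.
```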
It is a thing that's considered, mostly for very low frequencies (1-100 MHz), where all the above challenges are vastly simplified and where large antennas (potentially kilometers) are needed for any kind of directionality.
But at lower frequencies, maybe. Or you can do several passes with the same drone and process later :D
Deploying an antenna that's effectively 50 or 100 m wide by lifting 10-20 drones, after some simple ground preparations, could be invaluable in many scenarios, especially for the military, of course.
Probably in that specific use case yes, but we have had parabolic dishes and slotted waveguide apertures for a long time now...
(And soon, Chinese drones attacking through European cities will be able to hunt civilians 24/7?)