
Technologies and Applications

This file presents a number of technology topics that cannot be assigned to a particular mission. The following chapters contain only short descriptions and are presented in reverse chronological order. The topics should be of interest to the reader community.


Photonics: From custom-built to ready-made
Physicist creates fifth state of matter from their living room
Scientists use light to accelerate supercurrents, access forbidden light, quantum properties
Ultra-thin sail could speed journey to other star systems
Lighting material of the future
Smart chips for space
Flexible, ultra-thin solar cell
Satellite design applied to superyacht
Controlling light with light
Slow light to speed up LiDAR sensors development
First plant-powered IoT sensor sends signal to space
Skin-like sensors - wearable tech
Water drop antenna lens
Particle accelerator that fits on a chip
ESA helps industry for 5G innovation
Glowing solar cell
Quantum light sources pave the way for optical circuits
Driverless shuttle
New Method Can Spot Failing Infrastructure from Space
Atomic motion captured in 4-D for the first time
SUN-to-LIQUID
Melting satellites
The mysterious crystal that melts at two different temperatures
Mission Control 'Saves Science'
Testing satellite marker designs
Mirror array for LSS
Cold plasma tested on ISS
3D printing and milling
Athena optic bench
SmartSat architecture in spacecraft
Radiation tolerance of 2D material-based devices
Better Solar Cells
Converting Wi-Fi Signals to Electricity
Neonatal Intensive Care Units
Introduction of 5G communication connectivity
Unique 3D printed sensor technology
New Geodesy Application for Emerging Atom-Optics Technology
Wireless transmission at 100 Gbit/s
3D printing one of the strongest materials on Earth
Prototype nuclear battery packs
The Kilopower Project of NASA
Top Tomatoes - Mars Missions
NEXT-C ion propulsion engine
New dimension in design
Lasers Probing the nano-scale
References


Photonics: From custom-built to ready-made

• June 17, 2020: Information technology continues to progress at a rapid pace. However, the growing demands of data centers have pushed electrical input-output systems to their physical limit, creating a bottleneck. Maintaining this growth will require a shift in how we build computers. The future is optical. 1) 2)

Over the last decade, the field of photonics has provided a solution to the chip-to-chip bandwidth problem in the electronic world by increasing the link distance between servers with higher bandwidth, far less energy, and lower latency compared to electrical interconnects.


Figure 1: An international collaboration team of the University of California, Santa Barbara (UCSB), the California Institute of Technology (Caltech) and EPFL has developed an integrated technology that may revolutionize photonic systems (photo credit: Lin Chang)

One element of this revolution, silicon photonics, was advanced fifteen years ago when UC Santa Barbara and Intel demonstrated silicon laser technology. This has since triggered an explosion of this field. Intel is now delivering millions of silicon photonic transceivers for data centers all around the world.

Now, a collaboration between UC Santa Barbara, Caltech, and EPFL has made another revolutionary discovery in the field. The group managed to simplify and condense a complex optical system onto a single silicon photonic chip. The achievement, published in Nature, significantly lowers the cost of production and allows for easy integration with traditional silicon chip production. 3)

“The entire internet is driven by photonics now,” says John Bowers, who holds the Fred Kavli Chair in Nanotechnology at UC Santa Barbara and directs the campus’s Institute for Energy Efficiency and led the collaborative research effort.

Despite the great success of photonics in the internet’s backbone, there are still challenges. The explosion of data traffic also means growing requirements for the data rates that silicon photonic chips can handle. So far, the most efficient way to address this demand is to use multicolor laser light to transmit information: the more laser colors, the more information can be carried.

But this poses a problem for integrated lasers, which can generate only one color of laser light at a time. “You might literally need fifty or more lasers in that chip for that purpose,” says Bowers. And using fifty lasers is expensive and inefficient in terms of power. Also, noise and heat can cause the frequency of light that each laser produces to fluctuate. Finally, with multiple lasers, the frequencies can even drift into each other, much like early radio stations did.

A solution can be found in the technology of “optical frequency combs”, which are collections of equally spaced frequencies of laser light. Plotting the frequencies reveals spikes and dips that resemble a hair comb — hence the name.
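The comb structure described above can be sketched numerically: each tooth sits at f_n = f_ceo + n·f_rep, where f_rep is the constant line spacing and f_ceo the offset frequency. A minimal Python sketch with illustrative numbers (none taken from the paper):

```python
# Frequencies of an ideal optical frequency comb: f_n = f_ceo + n * f_rep,
# i.e. equally spaced "teeth" offset from zero by the carrier-envelope offset.
# All numeric values below are illustrative assumptions, not from the article.

def comb_frequencies(f_ceo_hz, f_rep_hz, n_min, n_max):
    """Return the comb tooth frequencies for tooth indices n_min..n_max."""
    return [f_ceo_hz + n * f_rep_hz for n in range(n_min, n_max + 1)]

# Example: a microcomb with a 100 GHz line spacing near 193 THz (1550 nm band).
teeth = comb_frequencies(f_ceo_hz=35e9, f_rep_hz=100e9, n_min=1928, n_max=1932)
for f in teeth:
    print(f"{f / 1e12:.3f} THz")
# Adjacent teeth differ by exactly f_rep, the comb's line spacing.
```

Each tooth acts as one "laser color", which is why a single comb source can stand in for the fifty or more individual lasers mentioned above.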

Generating combs used to require bulky and expensive equipment, but this can now be managed using the recently emerged microresonator-based soliton frequency combs, which are miniaturized frequency comb sources built on CMOS photonic chips. Using this “integrated photonics” approach, the collaborating team has developed the smallest comb generator in the world, which essentially resolves all of these issues.

The system is rather simple, consisting of a commercially available feedback laser and a silicon nitride photonic chip. “What we have is a source that generates all these colors out of one laser and one chip,” says Bowers. “That’s what’s significant about this.”

The simple structure means small scale, less power, and lower cost. The entire setup now fits in a package smaller than a matchbox, with an overall price and power consumption lower than those of previous systems.

The new technology is also much more convenient to operate. Previously, generating a stable comb had been a tricky endeavor. Researchers would have to adjust frequency and power just right to produce a coherent soliton comb, and even then, the process was not guaranteed to generate a comb every time. “The new approach makes the process as easy as switching on a room light,” says Kerry Vahala, Professor of Applied Physics and Information Science and Technology at Caltech, where the new soliton generation scheme was discovered.

“What is remarkable about the result is the full photonic integration and reproducibility with which frequency combs can be generated on demand,” adds Tobias J. Kippenberg, Professor of Physics at EPFL who leads the Laboratory of Photonics and Quantum Measurement (LPQM), and whose laboratory first observed microcombs more than a decade ago.

The EPFL team provided the ultralow-loss silicon nitride photonic chips, which were fabricated at the EPFL Center of MicroNanoTechnology (CMi) and serve as the key component for soliton comb generation. The low-loss silicon nitride photonics technology has been commercialized via the lab startup LIGENTEC.

The “magic” behind all these improvements lies in an interesting physical phenomenon: when the pump laser and resonator are integrated, their interaction forms a highly coupled system that is self-injection-locking and simultaneously generates “solitons” – pulses that circulate indefinitely inside the resonator and give rise to optical frequency combs.

The new technology is expected to have an extensive impact on photonics. In addition to addressing the demands of multicolor light sources in communication-related products, it also opens up a lot of new opportunities in many applications. One example is optical clocks, which provide the most accurate time standard in the world and are used in a number of applications, from navigation to measuring physical constants.

“Optical clocks used to be large, heavy, and expensive,” says Bowers. “There are only a few in the world. With integrated photonics, we can make something that could fit in a wristwatch, and you could afford it.”

“Low-noise integrated optical microcombs will enable a new generation of optical clocks, communications and sensors,” says Gordon Keeler, the project’s manager at the Defense Advanced Research Projects Agency (DARPA). “We should see more compact, more sensitive GPS receivers coming out of this approach.”

All in all, the future looks bright for photonics. “It is the key step to transfer the frequency comb technology from the laboratory to the real world,” says Bowers. “It will change photonics and our daily lives.”

This research was a collaboration of UC Santa Barbara, the California Institute of Technology and the Swiss Federal Institute of Technology Lausanne (EPFL). The project was funded by DARPA’s Direct On-Chip Digital Optical Synthesizer (DODOS) program, which demonstrated optical synthesizers using photonic integrated circuits.




Physicist creates fifth state of matter from their living room

• May 22, 2020: Dr Amruta Gadge from the Quantum Systems and Devices Laboratory successfully created a Bose-Einstein Condensate (BEC) - considered to be the fifth state of matter - using quantum technology based at the University of Sussex lab facilities (UK) despite working remotely from her living room two miles away. 4)


Figure 2: Dr Amruta Gadge setting up the lasers prior to lockdown. She created the fifth state of matter working from home using quantum technology. This result is believed to be the first time that BEC has been created remotely in a lab that did not have one before (image credit: University of Sussex)

The research team believe the achievement could provide a blueprint for operating quantum technology in inaccessible environments such as space.

Peter Krüger, Professor of Experimental Physics at the University of Sussex, said: “We believe this may be the first time that someone has established a BEC remotely in a lab that didn’t have one before. We are all extremely excited that we can continue to conduct our experiments remotely during lockdown, and any possible future lockdowns.

"But there are also wider implications beyond our team. Enhancing the capabilities of remote lab control is relevant for research applications aimed at operating quantum technology in inaccessible environments such as space, underground, in a submarine, or in extreme climates.”

A BEC consists of a cloud of hundreds of thousands of rubidium atoms cooled down to nanokelvin temperatures, more than a billion times colder than freezing.

At this point the atoms take on a different property and behave all together as a single quantum object. This quantum object has special properties which can sense very low magnetic fields.
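The temperature scale at which this happens can be estimated from the textbook result for a uniform ideal Bose gas, T_c = (2πħ²/(m·k_B))·(n/ζ(3/2))^(2/3). A short sketch, assuming a typical atom density for such experiments (the article itself quotes no density):

```python
import math

# Ideal-gas BEC transition temperature:
#   T_c = (2*pi*hbar^2 / (m*k_B)) * (n / zeta(3/2))**(2/3)
# The density used below is an assumed, typical value, not from the article.

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
K_B = 1.380649e-23       # Boltzmann constant, J/K
ZETA_3_2 = 2.6123753486  # Riemann zeta(3/2)

def bec_transition_temperature(mass_kg, density_m3):
    """Critical temperature of a uniform ideal Bose gas, in kelvin."""
    return (2 * math.pi * HBAR**2 / (mass_kg * K_B)) * (density_m3 / ZETA_3_2) ** (2 / 3)

m_rb87 = 86.909 * 1.66053907e-27             # kg, one rubidium-87 atom
t_c = bec_transition_temperature(m_rb87, density_m3=1e20)
print(f"T_c ~ {t_c * 1e9:.0f} nK")           # a few hundred nanokelvin
```

The result lands in the hundreds-of-nanokelvin range, consistent with the "nanokelvin temperatures" quoted above.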

Professor Krüger said: “We use multiple carefully timed steps of laser and radio wave cooling to prepare rubidium gases at these ultralow temperatures. This requires accurate computer control of laser light, magnets and electric currents in microchips based on vigilant monitoring of environmental conditions in the lab while nobody is able to be there to check in person.”

The Quantum Systems and Devices Group has spent the past nine months working to get a second lab with a BEC running consistently, as part of a wider project developing a new type of magnetic microscopy and other quantum sensors.

The research team uses atomic gases as magnetic sensors close to various objects including novel advanced materials, ion channels in cells, and the human brain.

Trapped cold quantum gases are controlled to create extremely accurate and precise sensors that are ideal for detecting and studying new materials, geometries and devices.

The research team are developing their sensors to be applied in many areas including electrical vehicle batteries, touch screens, solar cells and medical advancements such as brain imaging.

Just in time before lockdown, researchers set up a 2D magnetic optical trap (see Figure 2) and have returned only a couple of times to carry out essential maintenance. The team involved in reaching this goal comprises Professor Peter Krüger, Dr Fedja Orucevic, Dr Amruta Gadge, Dr Julia Fekete, Scott Sleegers, Shobita Bhumbra, Dr William Evans, Robert Shah and Dr Thomas Barrett.

Dr Gadge, Research Fellow in Quantum Physics and Technologies at the University of Sussex, was able to perform the complex calculations and then optimize and run the sequence from her home by accessing the lab computers remotely.


Figure 3: The final screen shot confirming the successful creation of the BEC can be seen in this image (image credit: University of Sussex)

She said: “The research team has been observing lockdown and working from home and so we have not been able to access our labs for weeks. But we were determined to keep our research going so we have been exploring new ways of running our experiments remotely. It has been a massive team effort.

"The process has been a lot slower than if I had been in the lab as the experiment is unstable and I’ve had to give 10-15 minutes of cooling time between each run. This is obviously not as efficient and way more laborious to do manually because I’ve not been able to do systematic scans or fix the instability like I could working in the lab.

“We’re hopeful of establishing a skeleton crew back in the labs with social distancing measures in place as soon as it is safe to do so and permitted but we will be able to have many of the team continuing to work from home on a rotational basis thanks to the progress we have made with remote working.”




Scientists use light to accelerate supercurrents, access forbidden light, quantum properties

• May 19, 2020: Scientists are using light waves to accelerate supercurrents and access the unique properties of the quantum world, including forbidden light emissions that one day could be applied to high-speed, quantum computers, communications and other technologies. 5)


Figure 4: This illustration shows light wave acceleration of supercurrents, which gives researchers access to a new class of quantum phenomena. That access could chart a path forward for practical quantum computing, sensing and communicating applications (image credit: Jigang Wang)

The scientists have seen unexpected things in supercurrents – electricity that moves through materials without resistance, usually at super cold temperatures – that break symmetry and are supposed to be forbidden by the conventional laws of physics, said Jigang Wang, a professor of physics and astronomy at Iowa State University, a senior scientist at the U.S. Department of Energy’s Ames Laboratory and the leader of the project.

Wang’s lab has pioneered the use of light pulses at terahertz frequencies – trillions of cycles per second – to accelerate electron pairs, known as Cooper pairs, within supercurrents. In this case, the researchers tracked light emitted by the accelerated electron pairs. What they found were “second harmonic light emissions,” or light at twice the frequency of the incoming light used to accelerate the electrons.

That, Wang said, is analogous to color shifting from the red spectrum to the deep blue.

“These second harmonic terahertz emissions are supposed to be forbidden in superconductors,” he said. “This is against the conventional wisdom.”
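The frequency relationship behind second harmonic generation is simple: the emitted light has twice the frequency, and hence half the wavelength, of the incoming light. A small illustrative sketch (the 800 nm input is an arbitrary example, not a value from the experiment):

```python
# Second-harmonic generation doubles the optical frequency, which halves the
# wavelength -- the red-toward-blue shift described in the text.

C = 299_792_458.0  # speed of light in vacuum, m/s

def second_harmonic_wavelength(fundamental_wavelength_m):
    """Wavelength of the second harmonic: f -> 2f implies lambda -> lambda/2."""
    fundamental_freq = C / fundamental_wavelength_m
    return C / (2 * fundamental_freq)

# 800 nm (red/near-infrared) doubles to 400 nm (blue-violet):
print(second_harmonic_wavelength(800e-9))
```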

Wang and his collaborators – including Ilias Perakis, professor and chair of physics at the University of Alabama at Birmingham and Chang-beom Eom, the Raymond R. Holton Chair for Engineering and Theodore H. Geballe Professor at the University of Wisconsin-Madison – report their discovery in a research paper just published online by the scientific journal Physical Review Letters. 6)

“The forbidden light gives us access to an exotic class of quantum phenomena – that’s the energy and particles at the small scale of atoms – called forbidden Anderson pseudo-spin precessions,” Perakis said. (The phenomena are named after the late Philip W. Anderson, co-winner of the 1977 Nobel Prize in Physics who conducted theoretical studies of electron movements within disordered materials such as glass that lack a regular structure.)

Wang’s recent studies have been made possible by a tool called quantum terahertz spectroscopy that can visualize and steer electrons. It uses terahertz laser flashes as a control knob to accelerate supercurrents and access new and potentially useful quantum states of matter. The National Science Foundation has supported development of the instrument as well as the current study of forbidden light.

The scientists say access to this and other quantum phenomena could help drive major innovations:

- “Just like today’s gigahertz transistors and 5G wireless routers replaced megahertz vacuum tubes or thermionic valves over half a century ago, scientists are searching for a leap forward in design principles and novel devices in order to achieve quantum computing and communication capabilities,” said Perakis, with Alabama at Birmingham. “Finding ways to control, access and manipulate the special characteristics of the quantum world and connect them to real-world problems is a major scientific push these days. The National Science Foundation has included quantum studies in its ‘10 Big Ideas’ for future research and development critical to our nation.”

- Wang said, “The determination and understanding of symmetry breaking in superconducting states is a new frontier in both fundamental quantum matter discovery and practical quantum information science. Second harmonic generation is a fundamental symmetry probe. This will be useful in the development of future quantum computing strategies and electronics with high speeds and low energy consumption.”

Before they can get there, though, researchers need to do more exploring of the quantum world. And this forbidden second harmonic light emission in superconductors, Wang said, represents “a fundamental discovery of quantum matter.”




Ultra-thin sail could speed journey to other star systems

• May 19, 2020: A tiny sail made of the thinnest material known – one carbon-atom-thick graphene – has passed initial tests designed to show that it could be a viable material to make solar sails for spacecraft. 7)

Light sails are one of the most promising existing space propulsion technologies that could enable us to reach other star systems within many decades.

Traditional spacecraft carry fuel to power their journeys and use complex orbital maneuvers around other planets. But the weight of the fuel makes them difficult to launch and intricate flyby maneuvers considerably lengthen the journey.

Solar sails need no fuel. Spacecraft equipped with them are thus much lighter and easier to launch.

Figure 5: Video of drop tower research into graphene light sails (image credit: Graphene Sail team)


Figure 6: Graphene light sail. A tiny sail made of the thinnest material known – one carbon-atom-thick graphene – has passed initial tests designed to show that it could be a viable material to make solar sails for spacecraft. Light sails are one of the most promising existing space propulsion technologies that could enable us to reach other star systems within many decades (image credit: Graphene Sail team)


Two spacecraft flown over the past decade have already demonstrated the technology, but they used sails made of polyimide and of mylar, a polyester film.

Graphene is much lighter. To test whether it could be used as a sail, researchers used a scrap just 3 millimeters across.

They dropped it from a 100-m tall tower at ZARM (Zentrum für angewandte Raumfahrt­technologie und Mikro­gravitation) in Bremen, Germany, to test whether it worked under vacuum and in microgravity.

Once the sail was in free-fall – effectively eliminating the effects of gravity – they shone a series of laser lights onto it, to see whether it would act as a solar sail.

Shining a 1-watt laser made the sail accelerate by up to 1 m/s², similar to the acceleration of an office lift. For solar sails, however, the acceleration continues as long as sunlight keeps hitting the sails, taking spacecraft to higher and higher speeds.
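The scale of light-sail acceleration can be estimated from photon momentum transfer, a = P(1+R)/(c·m) for a sail of reflectivity R intercepting laser power P. The mass and reflectivity below are hypothetical, chosen only to show that a nanogram-scale sail reaches roughly 1 m/s² under a 1 W laser; they are not measured values from the ZARM drop-tower experiment:

```python
# Idealized radiation-pressure acceleration of a flat light sail:
#   a = P * (1 + R) / (c * m)
# Assumed inputs below (mass, reflectivity) are for illustration only.

C = 299_792_458.0  # speed of light in vacuum, m/s

def sail_acceleration(power_w, mass_kg, reflectivity):
    """Acceleration from photon momentum transfer on a flat sail."""
    return power_w * (1 + reflectivity) / (C * mass_kg)

# A 1 W laser on a hypothetical few-nanogram sail:
a = sail_acceleration(power_w=1.0, mass_kg=5e-9, reflectivity=0.5)
print(f"a = {a:.2f} m/s^2")
```

Real graphene absorbs only a small fraction of incident light, so the effective momentum transfer in the experiment differs from this idealized mirror-like picture.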




In search of the lighting material of the future

• May 4, 2020: At the Paul Scherrer Institute (PSI) in Villigen, Switzerland, researchers have gained insights into a promising material for OLEDs (Organic Light-Emitting Diodes). The substance enables high light yields and would be inexpensive to produce on a large scale, meaning it is practically made for use in large-area room lighting. Researchers have been searching for such materials for a long time. The newly generated understanding will facilitate the rapid and cost-efficient development of new lighting appliances in the future. The study appears in the journal Nature Communications. 8) 9)


Figure 7: CuPCP gives off an intense green glow not only when current is applied, but also under UV light (image credit: PSI)

The compound is a yellowish solid. If you dissolve it in a liquid or place a thin layer of it on an electrode and then apply an electric current, it gives off an intense green glow. The reason: The molecules absorb the energy supplied to them and gradually emit it again in the form of light. This process is called electroluminescence. Light-emitting diodes are based on this principle.

This green luminescent substance is a hot candidate for producing OLEDs. For about three years now, OLEDs have been found in the displays of smartphones, for example. In the meantime, the first flexible television screens with these materials have also come onto the market.

In addition, OLEDs make cost-efficient room lighting with a large surface area possible. First, however, the materials best suited to this application need to be found. That's because many substances under consideration for OLEDs contain expensive materials such as iridium, and this impedes their application on a large scale and on extensive surfaces. Without such additives, the materials can actually emit only a small part of the energy supplied to them as light; the rest is lost, for example as vibrational energy.

The goal of current research is to find more efficient materials for cheaper and more environmentally friendly displays and large-area lighting. Here, inexpensive and readily available metals such as copper promise progress.

Under close examination

Researchers have now made a more precise examination of the copper-containing compound CuPCP. There are four copper atoms in the middle of each molecule, surrounded by carbon and phosphorus atoms. Copper is a relatively inexpensive metal, and the compound itself can be easily produced in large quantities - ideal preconditions for use over large surfaces.

"We wanted to understand what the excited state of the compound looks like," says Grigory Smolentsev, a physicist in the operando spectroscopy research group. That is: How does the substance change when it absorbs energy? For example, does the structure of the molecule change? How is the charge distributed over the individual atoms after excitation? "This reveals how high the losses of energy that will not be released as light are likely to be," added Smolentsev, "and it shows us how we can possibly minimize these losses."

Using two large research facilities at PSI - the Swiss Light Source (SLS) and the X-ray free-electron laser SwissFEL - as well as the European Synchrotron Radiation Facility in Grenoble, France, Smolentsev and his collaborators took a closer look at the short-lived excited states of the copper compound.

The measurements confirmed that the substance is a good candidate for OLEDs due to its chemical structure. The compound's quantum chemical properties make it possible to achieve a high light yield. One reason for this is that the molecule is relatively stiff, and its 3D structure changes only slightly when excited. Now researchers can start to further optimize this substance for use in OLEDs.

Tools for the future

What's more, the measurements at the three large research facilities at PSI and in Grenoble were significant not only for the investigation of this one copper-containing compound. There was more at stake: The experimental data obtained this way are also helpful in improving theoretical calculations regarding molecules in general.

"So in the future it will be possible to better predict which compounds are more suitable for OLEDs and which less," says Grigory Smolentsev. "The measurement data will help the chemists understand which part of the molecule stands in the way of high efficiency. And of course: how the compound can be improved to increase its light output."

The underlying study: “Triplet excited state of organometallic luminophore for OLEDs probed with pump-probe X-ray techniques.”




Smart chips for space

• April 30, 2020: Tiny integrated circuits destined for space missions, etched onto a single wafer of silicon, examined under a magnifier. 10)

To save money on the high cost of fabrication, various chips designed by different companies and destined for multiple ESA projects are crammed onto the same silicon wafers, etched into place at specialized semiconductor manufacturing plants or ‘fabs’.

Once manufactured, the chips, still on the wafer, are tested. The wafers are then chopped up. They become ready for use when placed inside protective packages – just like standard terrestrial microprocessors – and undergo final quality tests.

Through little metal pins or balls sticking out of their packages these miniature brains are then connected to other circuit elements – such as sensors, actuators, memory or power systems – used across the satellite.

Considering the time and money needed to develop complex chips like these, ESA’s Microelectronics section maintains a catalogue of chip designs, known as Intellectual Property (IP) cores, available to European industry through ESA licence.


Figure 8: Technology image of the week. Think of these IP cores as the tiniest mission ‘building blocks’: specialized designs to perform particular tasks in space, laid down within a microchip. These range from single ‘simpler’ functions such as decoding signals from Earth to control the satellite to highly complex computer tasks such as operating a complete spacecraft (image credit: ESA-A Le Floc'h)




Flexible, ultra-thin solar cell

• March 11, 2020: ESA has backed the creation of this flexible, ultra-thin solar cell to deliver the best power-to-mass ratio for space missions. 11)


Figure 9: Just about 0.02 mm thick – thinner than a human hair – the prototype solar cells were developed by Azur Space Solar Power in Germany and tf2 in the Netherlands; the cell seen here is from tf2. The project was backed through ESA’s Technology Development Element, investigating novel technologies for space (image credit: ESA–SJM Photography)

Possessing up to 32% ‘end of life’ efficiency, the solar cells were produced using a technique called ‘epitaxial lift-off’, meaning they were peeled off the Germanium substrate layer they were initially laid down on, so the costly material can be reused.

Both triple- and quadruple-junction solar cells were manufactured. This means they consist of three or four different layers of material, optimized to make use of different wavelengths of light making up the solar spectrum.

These thinner-than-paper solar cells could be harnessed for future ESA satellites or else high-altitude pseudo satellites (HAPS) – uncrewed aircraft or balloons to perform satellite-like tasks from the upper atmosphere.
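A back-of-the-envelope specific-power estimate shows why thinness matters. Only the 32% end-of-life efficiency and ~0.02 mm thickness come from the text; the solar constant and the assumed material density below are illustrative inputs, not figures from the project:

```python
# Rough power-to-mass estimate for an ultra-thin solar cell.
# Assumptions: AM0 solar constant and a GaAs-like mean stack density.

SOLAR_CONSTANT_AM0 = 1361.0  # W/m^2, sunlight intensity in Earth orbit

def specific_power(efficiency, thickness_m, density_kg_m3):
    """Electrical output per kilogram of cell material (W/kg)."""
    power_per_area = efficiency * SOLAR_CONSTANT_AM0   # W/m^2
    mass_per_area = thickness_m * density_kg_m3        # kg/m^2
    return power_per_area / mass_per_area

# 32% efficient, 20-micron cell, assumed mean density ~5300 kg/m^3:
print(f"{specific_power(0.32, 20e-6, 5300):.0f} W/kg")
```

The estimate lands in the kilowatts-per-kilogram range for the bare cell, far above what thicker, substrate-bound cells can deliver, which is the point of epitaxial lift-off.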




Satellite design applied to superyacht

• February 27, 2020: Dutch shipbuilder Royal Huisman applied the same concurrent engineering process developed by ESA for space missions to the design of superyacht Sea Eagle II, due to become the world’s largest aluminum sailing yacht when delivered to its owner this spring. 12)

Sea Eagle II’s modern style extends to its design, which was carried out using concurrent engineering, taking inspiration from the long-established Concurrent Design Facility (CDF) at ESA’s technical center ESTEC in Noordwijk, the Netherlands, where it is employed for preliminary design and assessment of potential future space missions and systems.

“Satellites and superyachts are both complex machines, and concurrent engineering is advantageous in designing any complex system,” explains Massimo Bandecchi, founder of ESA’s CDF. “The basic idea is simple: bring together all necessary experts and design tools into a single room to work together as a team on a shared software model that updates immediately as changes are made, to assess design feasibility and trade-offs in a much more effective and reliable way.”

“While our main focus is fulfilling the needs of ESA engineering, there has also been strong interest in our work from industry. Concurrent engineering’s improved performance in terms of time, cost and efficiency speaks for itself. The result is that more than 50 centers have been built following ESA’s original CDF model and are now in operation across Europe, the majority in the space sector, plus around 10 non-space centers.”

Stefan Coronel, Royal Huisman’s Design and Engineering Manager, received training from Massimo and his team before setting up his own concurrent engineering room: “Yacht building is not rocket science, but it does involve a complex, multi-disciplinary system, with lots of trade-offs to be decided.

“The traditional ‘over the hedge’ design method – where one knowledge field does its work, then throws it across to the next team in sequence – demands the subsequent checking of feedback and then possible design adjustments, so it is quite a time-consuming process. In the modern yacht-building world there isn’t so much time to spare.

“That said, compared to the dramatic shortening of satellite conceptual design time achieved by ESA, the main benefit we see from concurrent engineering is not gaining time but that the quality of the final design ends up much better, and more complete – giving us confidence to proceed to the build phase.”

Royal Huisman is now applying concurrent engineering to all of their new builds, and many of their refitting and service projects.

Mr. Coronel adds: “Our room is not as fancy as ESA’s CDF, but has the same basic approach of a place where everyone can contribute, with means of accessing all normal engineering tools and calculation methods, plus a splinter room for small separate discussions.”

In the same way that satellite design is broken down into subsystems, yacht design involves some main disciplines taking part in all the sessions: structural strength and stiffness; deck and sail handling; systems such as propulsion, power, heating and air conditioning; electronics and finally interior design – creating a desirable, luxurious interior. Additional external experts, such as noise and vibration specialists, attend as required.

“The kind of trade-offs that concurrent engineering makes easier to resolve include such deceptively simple tasks as placing a side hatch or staircase,” adds Mr. Coronel. “In the case of a hatch it would need to be watertight and endure loads from sea waves, while also integrated with the living space and looking good when trimmed with wood. While any staircase needs to be open and attractive, while also having pipes and electrical cables run through it, and meeting all relevant fire and safety regulations.”

The company’s adoption of concurrent engineering also meant Sea Eagle II’s aluminum panels had holes and support structures added to them in advance, saving time in construction and the integration of features such as winches or hatches.


Figure 10: This uniquely contemporary 81 m-long three-masted schooner was recently transported by barge from the company’s shipyard in Vollenhove to Royal Huisman Amsterdam, where its carbon composite rig will be installed, leaving her ready for sea trials and on-board crew training (photo credit: Royal Huisman)

European companies and institutions have variously adopted concurrent engineering for educating students, designing automobiles, planning oil platforms and optimizing the production plant of dairy product company FrieslandCampina.




Controlling light with light

• February 5, 2020: Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), in collaboration with researchers at McMaster University and University of Pittsburgh, have developed a new platform for all-optical computing, meaning computations done solely with beams of light. 13)


Figure 11: SEAS researchers have developed a new platform for all-optical computing, meaning computations done solely with beams of light (image credit: Harvard/SEAS)

“Most computation right now uses hard materials such as metal wires, semiconductors and photodiodes to couple electronics to light,” said Amos Meeks, a graduate student at SEAS and co-first author of the research. “The idea behind all-optical computing is to remove those rigid components and control light with light. Imagine, for example, an entirely soft, circuitry-free robot driven by light from the sun.”

These platforms rely on so-called non-linear materials that change their refractive index in response to the intensity of light. When light is shone through these materials, the refractive index in the path of the beam increases, generating its own, light-made waveguide. Currently, most non-linear materials require high-powered lasers or are permanently changed by the transmission of light.

Here, researchers developed a fundamentally new material that uses reversible swelling and contracting in a hydrogel under low laser power to change the refractive index.

The hydrogel is composed of a polymer network that is swollen with water, like a sponge, and a small number of light-responsive molecules known as spiropyran (which is similar to the molecule used to tint transition lenses). When light is shone through the gel, the area under the light contracts a small amount, concentrating the polymer and changing the refractive index. When the light is turned off, the gel returns to its original state.

When multiple beams are shone through the material, they interact and affect each other, even at large distances. Beam A could inhibit Beam B, Beam B could inhibit Beam A, both could cancel each other out or both could go through — creating an optical logic gate.
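The mutual-inhibition behavior described above can be sketched as a toy truth table. This is a minimal illustrative model, not the authors' model; the intensity and threshold values are invented for the example.

```python
# Toy model of two mutually inhibiting light beams forming a logic gate.
# Assumption: each beam transmits only while the other beam's intensity
# stays below an inhibition threshold (values here are illustrative).

def optical_gate(a_on: bool, b_on: bool,
                 intensity: float = 1.0, threshold: float = 0.5):
    """Return (A transmits, B transmits) under mutual inhibition."""
    i_a = intensity if a_on else 0.0
    i_b = intensity if b_on else 0.0
    a_out = a_on and i_b < threshold   # beam B suppresses beam A
    b_out = b_on and i_a < threshold   # beam A suppresses beam B
    return a_out, b_out

# Enumerate the truth table: with both beams on, each blocks the other
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", optical_gate(a, b))
```

With the chosen threshold, either beam alone passes but both together cancel, which is the "both could cancel each other out" case in the text.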

“Though they are separated, the beams still see each other and change as a result,” said Kalaichelvi Saravanamuttu, an associate professor of Chemistry and Chemical Biology at McMaster and co-senior author of the study. “We can imagine, in the long term, designing computing operations using this intelligent responsiveness.”

"Not only can we design photoresponsive materials that reversibly switch their optical, chemical and physical properties in the presence of light, but we can use those changes to create channels of light, or self-trapped beams, that can guide and manipulate light,” said co-author Derek Morim, a graduate student in Saravanamuttu’s lab.

“Materials science is changing,” said Joanna Aizenberg, the Amy Smith Berylson Professor of Materials Science at SEAS and co-senior author of the study. “Self-regulated, adaptive materials capable of optimizing their own properties in response to environment replace static, energy-inefficient, externally regulated analogs. Our reversibly responsive material that controls light at exceptionally small intensities is yet another demonstration of this promising technological revolution.”

This research was published in the Proceedings of the National Academy of Sciences. It was co-authored by Ankita Shastri, Andy Tran, Anna V. Shneidman, Victor V. Yashin, Fariha Mahmood, Anna C. Balazs. It was supported in part by the US Army Research Office under Award W911NF-17-1-0351 and by the Natural Sciences and Engineering Research Council, Canadian Foundation for Innovation. 14)




Slow light to speed up LiDAR sensors development

• January 21, 2020: Quicker is not always better, especially when it comes to a 3D sensor in advanced technology. With applications in autonomous vehicles, robots and drones, security systems and more, researchers are striving for a 3D sensor that is compact and easy to use. 15)

A team from Yokohama National University in Japan believes it has developed a method to obtain such a sensor by taking advantage of slow light – an unexpected move in a field where speed is often valued above other variables.


Figure 12: A small-sized silicon photonics chip that can be used for non-mechanical beam steering and scanning (image credit: Yokohama National University)

They published their results on 14 January 2020 in Optica, a journal published by The Optical Society. 16)

LiDAR (Light Detection and Ranging) sensors use laser light to measure the distance to remote objects. Most modern LiDAR systems consist of a laser source; a photodetector, which converts light into current; and an optical beam-steering device, which directs the light to the proper location.
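The ranging principle behind any LiDAR sensor comes down to time of flight: a pulse travels to the target and back, so the range is half the round-trip time multiplied by the speed of light. A minimal sketch:

```python
# Time-of-flight ranging, the principle underlying LiDAR distance
# measurement: range = c * round_trip_time / 2 (the pulse travels
# out and back, hence the division by two).
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_range_m(round_trip_s: float) -> float:
    return C * round_trip_s / 2.0

# e.g. a pulse returning after ~667 ns indicates a target ~100 m away
print(f"{lidar_range_m(667e-9):.1f} m")
```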

"Currently existing optical beam steering devices all use some kind of mechanics, such as rotary mirrors," said Toshihiko Baba, paper author and professor in the Department of Electrical and Computer Engineering at Yokohama National University. "This makes the device large and heavy, with limited overall speed and a high cost. It all becomes unstable, particularly in mobile devices, hampering the wide range of applications."

In recent years, according to Baba, more engineers have turned toward optical phased arrays, which direct the optical beam without mechanical parts. But, Baba warned, such an approach can become complicated due to the sheer number of optical antennae required, as well as the time and precision needed to calibrate each piece.

"In our study, we employed another approach - what we call 'slow light,'" Baba said. Baba and his team used a special waveguide known as a photonic crystal, etched into silicon. When light is forced to interact with the photonic crystal, it is slowed down and emitted into free space. The researchers then used a prism lens to direct the beam in the desired direction.
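What "slow" means here is that the group velocity of light in the photonic-crystal waveguide drops well below its value in an ordinary guide. A short sketch, using illustrative (not measured) group-index values:

```python
# Group velocity in a waveguide: v_g = c / n_g. In a photonic-crystal
# waveguide the group index n_g can be far larger than in a conventional
# guide, slowing the light. The n_g values below are illustrative only,
# not figures from the Yokohama study.
C = 299_792_458.0  # speed of light in vacuum, m/s

def group_velocity(n_g: float) -> float:
    return C / n_g

for n_g in (1.5, 30.0):  # conventional waveguide vs slow-light regime
    print(f"n_g = {n_g}: v_g = {group_velocity(n_g):.3e} m/s")
```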

"The non-mechanical steering is thought to be crucial for LiDAR sensors," Baba said. The resulting device is compact and free of moving parts, setting the stage for a solid-state LiDAR. Such a device is considered smaller, cheaper to make and more resilient, especially in mobile applications such as autonomous vehicles.

Next, Baba and his team plan to more fully demonstrate the potential of a solid-state LiDAR, as well as work on improving its performance with the ultimate goal of commercializing the device.




First plant-powered IoT sensor sends signal to space

• 14 January 2020: The first-ever plant-powered sensor has successfully transmitted to a satellite in space. The pilot service, using plants as the energy source, has been developed by the Dutch company Plant-e and by Lacuna Space, which is based in the Netherlands and the UK, under ESA’s ARTES (Advanced Research in Telecommunications Systems) program. Because the system stores energy internally, the sensor needs no batteries, reducing cost, maintenance requirements and environmental impact. As long as the plants continue to grow, electricity will be produced. 17) 18)


Figure 13: Plant-powered sensors wetlands research site (image credit: Plant-e BV)

Such sensors could be used to connect everyday objects in remote locations, enabling them to send and receive data as part of the IoT (Internet of Things).

The device can inform farmers about the conditions of their crops to help increase yield, and enable retailers to gain detailed information about potential harvests.

It transmits data on air humidity, soil moisture and temperature, enabling field-by-field reporting from agricultural land, rice fields or other aquatic environments. The extremely low power device sends signals at radio frequencies that are picked up by satellites in LEO (Low Earth Orbit).

Plants produce organic matter through photosynthesis, but only part of this matter is used for plant growth. The rest is excreted into the soil through the plant’s roots. In the soil, bacteria around the roots break down this organic matter, releasing electrons as a waste product. The technology developed by Plant-e harvests these electrons to power small electrical devices.

The IoT prototype device, developed by the two companies, uses the electricity generated by living plants to transmit LoRa® [Long Range, LoRa is a low-power wide area network (LPWAN)] messages about air humidity, soil moisture, temperature, cell voltage and electrode potential straight to Lacuna's satellite.
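Very low-power LoRa uplinks favor compact binary payloads. The sketch below packs the five quantities mentioned above into ten bytes; the field order, widths and fixed-point scaling are assumptions for illustration, not Lacuna Space's actual message format.

```python
# Hypothetical packing of the five reported telemetry values into a
# compact binary payload for a low-bandwidth LoRa uplink. Layout and
# scaling are illustrative assumptions, not the real protocol.
import struct

def pack_telemetry(air_rh_pct: float, soil_moist_pct: float, temp_c: float,
                   cell_mv: float, electrode_mv: float) -> bytes:
    # five big-endian signed 16-bit fields, fixed point with one decimal
    values = (air_rh_pct, soil_moist_pct, temp_c, cell_mv, electrode_mv)
    return struct.pack(">5h", *(round(v * 10) for v in values))

payload = pack_telemetry(54.3, 31.2, 18.7, 412.0, -120.5)
print(len(payload), "bytes")  # five readings in 10 bytes
```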

Plant-e, a start-up from Wageningen, the Netherlands, has developed a technology to harvest electrical energy from living plants and bacteria to generate carbon-negative electricity. The output generates enough energy to power LEDs and sensors in small-scale products. 19)

“This collaboration shows how effective plant-electricity already is at its current state of development,” said Plant-e CEO Marjolein Helder. “We hope this inspires others to consider plant-electricity as a serious option for powering sensors.”

Lacuna, based in the UK and the Netherlands, is launching a LEO (Low Earth Orbit) satellite system that will provide a global Internet-of-Things service. The service allows data to be collected from sensors even in remote areas with little or no connectivity. At the moment Lacuna Space is offering a pilot service with one satellite in orbit, and three more satellites are awaiting launch during the next few months.

“This opens up a new era in sustainable satellite communications,” says Rob Spurrett, chief executive and co-founder of Lacuna Space. “There are many regions in the world that are difficult to reach, which makes regular maintenance expensive and the use of solar power impossible. Through this technology, we can help people, communities and companies in those regions to improve their lives and businesses.”


Figure 14: Plant-powered sensor schematic (image credit: Plant-e BV)

Frank Zeppenfeldt, who works on future satellite communication systems at ESA, says: “We are very enthusiastic about this demonstration that combines biotechnology and space technology. It will help to collect small data points in agricultural, logistic, maritime and transportation applications—where terrestrial connectivity is not always available.”




Skin-like sensors bring a human touch to wearable tech

• 13 January 2020: University of Toronto Engineering researchers have developed a super-stretchy, transparent and self-powering sensor that records the complex sensations of human skin. 20)

- The researchers have dubbed the material AISkin (Artificial Ionic Skin), and believe its innovative properties could lead to future advancements in wearable electronics, personal health care and robotics.


Figure 15: Super stretchy, transparent and self-powering, researchers Xinyu Liu (MIE) and Binbin Ying (MIE, pictured) believe their AISkin will lead to meaningful advancements in wearable electronics, personal health care, and robotics (image credit: Daria Perevezentsev)

- “Since it’s hydrogel, it’s inexpensive and biocompatible — you can put it on the skin without any toxic effects. It’s also very adhesive, and it doesn’t fall off, so there are so many avenues for this material,” says Professor Xinyu Liu (MIE), whose lab focuses on the emerging areas of ionic skin and soft robotics.

- The adhesive AISkin is made of two oppositely charged sheets of stretchable substances known as hydrogels. By overlaying negative and positive ions, the researchers create what they call a “sensing junction” on the gel’s surface.

- When the AISkin is subjected to strain, humidity or changes in temperature, it generates controlled ion movements across the sensing junction, which can be measured as electrical signals such as voltage or current.

- “If you look at human skin, how we sense heat or pressure, our neural cells transmit information through ions — it’s really not so different from our artificial skin,” says Liu.

- AISkin is also uniquely tough and stretchable. “Our human skin can stretch about 50 per cent, but our AISkin can stretch up to 400 per cent of its length without breaking,” says Binbin Ying (MIE), a visiting PhD candidate from McGill University who’s leading the project in Liu’s lab. The researchers recently published their findings in Materials Horizons. 21)

Figure 16: Human skin can stretch about 50%, but our AISkin can stretch up to 400% of its length without breaking (image credit: Daria Perevezentsev)

- The new AISkin could open doors to skin-like Fitbits that measure multiple body parameters, or an adhesive touchpad you can stick onto the surface of your hand, adds Liu. “It could work for athletes looking to measure the rigor of their training, or it could be a wearable touchpad to play games.”

- It could also measure the progress of muscle rehabilitation. “If you were to put this material on a glove of a patient rehabilitating their hand for example, the health care workers would be able to monitor their finger-bending movements,” says Liu.

Figure 17: Binbin Ying demonstrates how AISkin could be used to measure the progress of muscle rehabilitation (image credit: Binbin Ying)

- Another application is in soft robotics — flexible bots made completely out of polymers. An example is soft robotic grippers used in factories to handle delicate objects such as light bulbs or food.

- The researchers envision AISkin being integrated onto soft robots to measure data, whether it’s the temperature of food or the pressure necessary to handle brittle objects.

- Over the next year, Liu’s lab will be focused on further enhancing their AISkin, aiming to shrink the size of AISkin sensors through microfabrication. They’ll also add bio-sensing capabilities to the material, allowing it to measure biomolecules in body fluids such as sweat.

- “If we further advance this research, this could be something we put on like a ‘smart bandage,’” says Liu. “Wound healing requires breathability, moisture balance – ionic skin feels like the natural next step.”
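Reading strain from the voltage signals described above amounts to inverting a calibration curve. The sketch below assumes a linear response with invented coefficients; a real AISkin sensor would need its own measured calibration.

```python
# Illustrative voltage-to-strain calibration for an ionic-skin sensor.
# The offset (v0_mv) and slope (mv_per_unit_strain) are invented numbers
# for illustration only; they are not from the Materials Horizons paper.

def voltage_to_strain(v_mv: float, v0_mv: float = 2.0,
                      mv_per_unit_strain: float = 1.5) -> float:
    """Map junction voltage (mV) to strain (1.0 = 100% elongation)."""
    strain = (v_mv - v0_mv) / mv_per_unit_strain
    return max(0.0, min(strain, 4.0))  # clamp to the 400% device limit

print(voltage_to_strain(5.0))  # (5.0 - 2.0) / 1.5 = 2.0, i.e. 200%
```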




Water drop antenna lens

• 08 January 2020: This novel ‘water drop’ antenna lens design for directing radio wave signals was developed by a pair of antenna engineers from ESA and Sweden’s Royal Institute of Technology, KTH. 22)


Figure 18: The inventors of this new lens design, which received an ESA Technical Improvement award in February 2017, like to call it the ‘water drop’ lens because its shape resembles the ripples produced by a water drop at the surface of a fluid (image credit: ESA–SJM Photography)

In the same way that optical lenses focus light, waveguide lenses serve to direct electromagnetic radio wave energy in a given direction – for instance to send out a radar or a communication signal – and minimize energy loss in the process.

Traditional waveguide lenses rely on complex, electrically sensitive ‘dielectric’ materials to shape electromagnetic signals as desired, but this water drop waveguide lens – once its top plate has been added – relies purely on its curved shape to direct signals through it.

The lack of dielectrics in this shape-based design is an advantage, especially for space – where they would risk giving off unwanted fumes in orbital vacuum.


“The lens’s extremely simple structure should make it easy and cheap to manufacture, opening up avenues to a wide variety of potential materials such as metalized plastics,” explains ESA antenna engineer Nelson Fonseca.

“This prototype has been designed for the 30 GHz microwave range but the simplicity of its shape-based design also means it should be applicable to a broad frequency range – the higher the frequency, the smaller the structure, facilitating its integration”.

The idea came out of a brainstorming session during a conference, explains KTH antenna engineer Oscar Quevedo-Teruel: “We took the ‘Rinehart-Luneburg lens’, also called the geodesic lens, as our starting point. This is a cylindrical waveguide lens developed in the late 1940s, mostly for radar applications.

“We wanted the same performance, while reducing its size and height. So the idea we had was to retain the functional curvature of the original design by folding it in on itself, reducing its profile by a factor of four in the specific case of the manufactured prototype.”
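For context, the classical Luneburg lens that the geodesic design is equivalent to focuses an incoming plane wave to a point on the opposite rim using a graded refractive index, n(r) = sqrt(2 − (r/R)²); the geodesic variant trades that index grading for a curved metal surface, which is what the ‘water drop’ design then folds to reduce height. A quick sketch of the classical profile:

```python
import math

# Classical Luneburg lens index profile: n(r) = sqrt(2 - (r/R)^2).
# The index runs from sqrt(2) at the center down to 1 at the rim,
# which is what lets a plane wave focus to a point on the far edge.

def luneburg_index(r: float, radius: float = 1.0) -> float:
    return math.sqrt(2.0 - (r / radius) ** 2)

print(round(luneburg_index(0.0), 3))  # 1.414 at the center
print(round(luneburg_index(1.0), 3))  # 1.0 at the rim
```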

This first prototype of a water drop lens was tested at KTH facilities, Oscar adds, to measure its radiation patterns, efficiency and gain: “While a conventional Luneburg lens might suffer from elevated dielectric losses, especially when used at higher frequencies, this design shows marginal signal loss thanks to its fully metallic design.”

Besides space applications, such as Earth observation and satellite communications on small satellites, the antenna has also attracted the attention of non-space companies. Ericsson is looking into using the compact design for fifth-generation (5G) mobile phone networks, and the concept could also be used for guidance radars in the next generation of self-driving cars.




Researchers build a particle accelerator that fits on a chip

• 02 January 2020: On a hillside above Stanford University, the SLAC National Accelerator Laboratory operates a scientific instrument nearly 2 miles long. In this giant accelerator, a stream of electrons flows through a vacuum pipe, as bursts of microwave radiation nudge the particles ever-faster forward until their velocity approaches the speed of light, creating a powerful beam that scientists from around the world use to probe the atomic and molecular structures of inorganic and biological materials. 23) 24)

Now, for the first time, scientists have created a silicon chip that can accelerate electrons — albeit at a fraction of the velocity of the most massive accelerators — using an infrared laser to deliver, in less than a hair's width, the sort of energy boost that takes microwaves many feet.


Figure 19: This image, magnified 25,000 times, shows a section of a prototype accelerator-on-a-chip. The segment shown here is one-tenth the width of a human hair. The oddly shaped gray structures are nanometer-sized features carved into silicon that focus bursts of infrared laser light, shown in yellow and purple, on a flow of electrons through the center channel. As the electrons travel from left to right, the light focused in the channel is carefully synchronized with passing particles to move them forward at greater and greater velocities. By packing 1,000 of these acceleration channels onto an inch-sized chip, Stanford researchers hope to create an electron beam that moves at 94 percent of the speed of light, and to use this energized particle flow for research and medical applications (image credit: Neil Sapra)

Writing in the Jan. 3 issue of Science, a team led by electrical engineer Jelena Vuckovic explained how they carved a nanoscale channel out of silicon, sealed it in a vacuum and sent electrons through this cavity while pulses of infrared light—to which silicon is as transparent as glass is to visible light—were transmitted by the channel walls to speed the electrons along. 25)

The accelerator-on-a-chip demonstrated in Science is just a prototype, but Vuckovic said its design and fabrication techniques can be scaled up to deliver particle beams accelerated enough to perform cutting-edge experiments in chemistry, materials science and biological discovery that don't require the power of a massive accelerator.

“The largest accelerators are like powerful telescopes. There are only a few in the world and scientists must come to places like SLAC to use them,” Vuckovic said. “We want to miniaturize accelerator technology in a way that makes it a more accessible research tool.”

Team members liken their approach to the way that computing evolved from the mainframe to the smaller but still useful PC. Accelerator-on-a-chip technology could also lead to new cancer radiation therapies, said physicist Robert Byer, a co-author of the Science paper. Again, it’s a matter of size. Today, medical X-ray machines fill a room and deliver a beam of radiation that’s tough to focus on tumors, requiring patients to wear lead shields to minimize collateral damage.

“In this paper we begin to show how it might be possible to deliver electron beam radiation directly to a tumor, leaving healthy tissue unaffected,” said Byer, who leads the ACHIP (Accelerator on a Chip International Program), a broader effort of which this current research is a part.

Inverse design

In their paper, Vuckovic and graduate student Neil Sapra, the first author, explain how the team built a chip that fires pulses of infrared light through silicon to hit electrons at just the right moment, and just the right angle, to move them forward just a bit faster than before.

To accomplish this, they turned the design process upside down. In a traditional accelerator, like the one at SLAC, engineers generally draft a basic design, then run simulations to physically arrange the microwave bursts to deliver the greatest possible acceleration. But microwaves measure 4 inches from peak to trough, while infrared light has a wavelength one-tenth the width of a human hair. That difference explains why infrared light can accelerate electrons in such short distances compared to microwaves. But this also means that the chip's physical features must be 100,000 times smaller than the copper structures in a traditional accelerator. This demands a new approach to engineering based on silicon integrated photonics and lithography.
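The scale gap in that paragraph can be checked with simple unit conversion: shrinking a ~4-inch microwave structure by the quoted factor of 100,000 lands at the micrometer scale of the chip's features.

```python
# Scale arithmetic from the text: microwave accelerator structures span
# about 4 inches, and the chip's features must be 100,000 times smaller.
# This is unit conversion only, using the article's own numbers.
microwave_structure_m = 4 * 0.0254            # 4 inches in meters
chip_feature_m = microwave_structure_m / 100_000

print(f"{chip_feature_m * 1e6:.0f} µm")       # about 1 µm feature scale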

Vuckovic's team solved the problem using inverse design algorithms that her lab has developed. These algorithms allowed the researchers to work backward, by specifying how much light energy they wanted the chip to deliver, and tasking the software with suggesting how to build the right nanoscale structures required to bring the photons into proper contact with the flow of electrons.


The design algorithm came up with a chip layout that seems almost otherworldly. Imagine nanoscale mesas, separated by a channel, etched out of silicon. Electrons flowing through the channel run a gantlet of silicon wires, poking through the canyon wall at strategic locations. Each time the laser pulses—which it does 100,000 times a second—a burst of photons hits a bunch of electrons, accelerating them forward. All of this occurs in less than a hair's width, on the surface of a vacuum-sealed silicon chip, made by team members at Stanford.

The researchers want to accelerate electrons to 94 percent of the speed of light, or 1 million electron volts (1MeV), to create a particle flow powerful enough for research or medical purposes. This prototype chip provides only a single stage of acceleration, and the electron flow would have to pass through around 1,000 of these stages to achieve 1MeV. But that's not as daunting as it may seem, said Vuckovic, because this prototype accelerator-on-a-chip is a fully integrated circuit. That means all of the critical functions needed to create acceleration are built right into the chip, and increasing its capabilities should be reasonably straightforward.
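The staging arithmetic above is straightforward: reaching 1 MeV with about 1,000 identical stages implies roughly 1 keV of energy gain per stage.

```python
# Back-of-the-envelope staging arithmetic from the paragraph above:
# 1 MeV total energy from ~1,000 identical acceleration stages.
target_ev = 1_000_000
stages = 1_000
gain_per_stage_ev = target_ev / stages
print(f"{gain_per_stage_ev:.0f} eV per stage")  # 1000 eV, i.e. 1 keV
```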

The researchers plan to pack a thousand stages of acceleration into roughly an inch of chip space by the end of 2020 to reach their 1MeV target. Although that would be an important milestone, such a device would still pale in power alongside the capabilities of the SLAC research accelerator, which can generate energy levels 30,000 times greater than 1MeV. But Byer believes that, just as transistors eventually replaced vacuum tubes in electronics, light-based devices will one day challenge the capabilities of microwave-driven accelerators.

Meanwhile, in anticipation of developing a 1MeV accelerator on a chip, electrical engineer Olav Solgaard, a co-author on the paper, has already begun work on a possible cancer-fighting application. Today, highly energized electrons aren't used for radiation therapy because they would burn the skin. Solgaard is working on a way to channel high-energy electrons from a chip-sized accelerator through a catheter-like vacuum tube that could be inserted below the skin, right alongside a tumor, using the particle beam to administer radiation therapy surgically.

"We can derive medical benefits from the miniaturization of accelerator technology in addition to the research applications," Solgaard said.


Some background on SLAC: SLAC National Accelerator Laboratory operates in Menlo Park, California, as a United States Department of Energy laboratory under the programmatic direction of the DOE Office of Science. Originally named the Stanford Linear Accelerator Center, SLAC was founded in 1962 just west of the university's campus, covering 426 acres. Its research program centers on experimental and theoretical elementary particle physics using electron beams, along with a broad program of research in atomic and solid-state physics. In March 2009 it was announced that SLAC would receive $68.3 million in Recovery Act funding, disbursed by the DOE Office of Science. As of 2005, SLAC employed over 1,000 people, some 150 of whom were physicists with doctorates, and served over 3,000 visiting researchers yearly. It operates particle accelerators for high-energy physics as well as the Stanford Synchrotron Radiation Laboratory (SSRL) for synchrotron light research, which aided the work of Stanford Professor Roger D. Kornberg, winner of the Nobel Prize in Chemistry in 2006. 26)


Figure 20: Aerial photo showing the 2-mile length of SLAC, the largest linear accelerator in the world (image credit: Stanford University)




ESA helps industry for 5G innovation

• 25 September 2019: Connecting people and machines to everything, everywhere and at all times through 5G networks promises to transform society. People will be able to access information and services developed to meet their immediate needs but, for this to happen seamlessly, satellite networks are needed alongside terrestrial ones. 27)

Figure 21: Space's part in the 5G revolution. Everybody is talking about 5G, the new generation of wireless communication. We are at the start of a revolution in connectivity for everything, everywhere, at all times. Space plays an important role in this revolution. We need satellites to ensure businesses and citizens can benefit smoothly from 5G (video credit: ESA)

The European Space Agency is working with companies keen to develop and use space-enabled seamless 5G connectivity to develop ubiquitous services. At the UK Space Conference, held from 24 to 26 September in Newport, South Wales, UK, ESA is showcasing its work with several British-based companies, supported by the UK Space Agency.

The companies are working on applications that range from autonomous ships to connected cars and drone delivery, from cargo logistics to emergency services, from media and broadcast to financial services.

Spire is a satellite-powered data company that provides predictive analysis for global maritime, aviation and weather forecasting. It uses automatic identification systems aboard ships to track their whereabouts on the oceans.

Spire’s network of 80 nanosatellites picks up the identity, position, course and speed of each vessel. Thanks to intelligent machine-learning algorithms, it can predict vessel locations and the ship’s estimated time of arrival at port, enabling port authorities to manage busy docks and market traders to price the goods carried aboard.
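The simplest baseline behind such an arrival-time prediction is great-circle distance to port divided by current speed over ground; Spire's machine-learning models refine far beyond this, so the sketch below is only the naive estimate, with invented example coordinates.

```python
import math

# Naive ETA baseline from AIS-style data: great-circle distance to the
# destination port divided by current speed over ground. This is only
# the trivial estimate that ML-based predictions improve upon.
EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two (lat, lon) points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def naive_eta_hours(lat, lon, port_lat, port_lon, speed_knots):
    dist_km = great_circle_km(lat, lon, port_lat, port_lon)
    return dist_km / (speed_knots * 1.852)  # 1 knot = 1.852 km/h

# A ship one degree of latitude (~111 km) from port, making 15 knots
print(f"{naive_eta_hours(51.0, 3.0, 52.0, 3.0, 15.0):.1f} h")
```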

Peter Platzer, chief executive of Spire, said: “ESA recognized the value of smaller, more nimble satellites and was looking for a provider that could bring satellites more rapidly and cheaper to orbit. That really was the start of our collaboration. ESA was instrumental in the fact that Spire’s largest office today is in the UK and most of its workforce is in Europe.”

Integrating the ubiquity and unprecedented performance of satellites with terrestrial 5G networks is fundamental to the future success of 'Project Darwin', a project to develop connected cars in a partnership between ESA, Telefonica O2, a satellite operator, the universities of Oxford and Glasgow, and several UK-based start-up companies.

Connected cars need to switch seamlessly between terrestrial and satellite networks, so that people and goods can move across the country without any glitches.

Darwin relies on a terminal that will allow seamless switching between the networks.

Daniela Petrovic of Telefonica O2, who founded Darwin, said: “There is a really nice ecosystem of players delivering innovation. ESA provided the opportunities to start discussions with satellite operators and helped us create this partnership.

“There is a good body of knowledge within ESA on innovation and science hubs and this gave us the opportunity to see what other start-ups are doing. Through ESA, we are getting exposure to 22 member state countries which can see the opportunity and maybe get involved.”

Magali Vaissiere, Director of Telecommunications and Integrated Applications at ESA, said: “We are very excited to see the response of industry to our Space for 5G initiative, which aims to bring together the cellular and satellite telecommunications world and provide the connectivity fabric to enable the digital transformation of industry and society.

“The showcase of flagship 5G projects today confirms the strategic importance of our Space for 5G initiative, which will be a significant strategic part of the upcoming ESA Conference of Ministers to be held in November.”

Other companies that formed part of the showcase include: Cranfield University, which as part of its Digital Aviation Research and Technology Centre is set to spearhead the UK’s research into digital aviation technology; HiSky, a satellite virtual network operator that offers global low-cost voice, data and internet of things communications using existing telecommunications satellites; Inmarsat, a global satellite operator that is showcasing a range of new maritime services enabled by the seamless integration of 5G cellular and satellite connectivity; Open Cosmos, a small satellite manufacturer based at Harwell in Oxfordshire, which is investigating how to deliver 5G by satellite; and Sky and Space Global based in London that plans a constellation of 200 nanosatellites in equatorial low Earth orbit for narrowband communications.




Glowing solar cell

• 25 September 2019: A solar cell is being turned into a light source by running electric current through it. Such ‘luminescence’ testing is performed routinely in ESA’s Solar Generator Laboratory, employed to detect cell defects – such as the cracks highlighted here. 28)

By happy accident, the solar (or ‘photovoltaic’) cell was invented in 1954, just before the start of the Space Age, allowing satellites to run off the abundant sunshine found in Earth orbit and beyond.


Figure 22: Made from the same kind of semiconductor materials as computer circuits, solar cells are designed so that incoming sunlight generates an electric current. But the process can be reversed for test purposes: apply an electric charge and a solar cell will glow (image credit: ESA–SJM Photography)

Solar cells, carefully assembled together into arrays, are an essential part of space missions, together with specially-designed batteries for times when a satellite needs more power, passes into darkness or faces a power emergency – plus the power conditioning and distribution electronics keeping all parts of a mission supplied with the power they require.

“Space power technologies are second only to launchers in ensuring European competitiveness and non-dependence,” comments Véronique Ferlet-Cavrois, Head of ESA’s Power Systems, EMC & Space Environment Division.

“Without the research and development ESA performs with European industry to ensure the continued availability of high-performance space power systems and components we would be left utterly reliant on foreign suppliers, or missions wouldn’t fly at all. We will be taking a look back at the important work done during the last three decades during this month’s European Space Power Conference.”

The 12th European Space Power Conference (ESPC) is taking place in Juan-les-Pins, Côte d'Azur, France, from 30 September to 4 October, with almost 400 participants. Véronique is chairing the event.

“It will begin 30 years to the week from the very first conference in the series,” adds ESA power conditioning engineer Mariel Triggianese, ESPC’s technical coordinator.

“So we’ll be commemorating our past but also looking forward. Our theme is ‘Space Power, Achievements and Challenges’. The chief technology officers from Airbus, Thales, Ariane Group and OHB will be joined by ESA’s Director of Technology Engineering and Quality, Franco Ongaro, to discuss the space power needs of their markets into the future.”




Quantum light sources pave the way for optical circuits

05 August 2019: An international team headed up by Alexander Holleitner and Jonathan Finley, physicists at the Technical University of Munich (TUM), has succeeded in placing light sources in atomically thin material layers with an accuracy of just a few nanometers. The new method allows for a multitude of applications in quantum technologies, from quantum sensors and transistors in smartphones through to new encryption technologies for data transmission. 29) 30)

Circuits on today's chips rely on electrons as the information carriers. In the future, photons, which transmit information at the speed of light, will be able to take on this task in optical circuits. Quantum light sources, which are then connected with quantum fiber-optic cables and detectors, are needed as basic building blocks for such new chips.


Figure 23: By bombarding thin molybdenum sulfide layers with helium ions, physicists at TUM succeeded in placing light sources in atomically thin material layers with an accuracy of just a few nanometers. The new method allows for a multitude of applications in quantum technologies (image credit: TUM)

First step towards optical quantum computers: "This constitutes a first key step towards optical quantum computers," says Julian Klein, lead author of the study. "Because for future applications the light sources must be coupled with photon circuits, waveguides for example, in order to make light-based quantum calculations possible."

The critical point here is the exact and precisely controllable placement of the light sources. It is possible to create quantum light sources in conventional three-dimensional materials such as diamond or silicon, but they cannot be precisely placed in these materials.

Deterministic defects: The physicists then used a layer of the semiconductor molybdenum disulfide (MoS2) as the starting material, just three atoms thick. They irradiated this with a helium ion beam which they focused on a surface area of less than one nanometer.

In order to generate optically active defects, the desired quantum light sources, molybdenum or sulfur atoms are precisely hammered out of the layer. The imperfections are traps for so-called excitons, electron-hole pairs, which then emit the desired photons.

Of central technical importance here was the new helium ion microscope at the Walter Schottky Institute's Center for Nanotechnology and Nanomaterials, which can be used to irradiate such materials with unparalleled lateral resolution.

On the road to new light sources: Together with theorists at TUM, the Max Planck Society, and the University of Bremen, the team developed a model which also describes the energy states observed at the imperfections in theory.

In the future, the researchers also want to create more complex light source patterns, in lateral two-dimensional lattice structures for example, in order to thus also research multi-exciton phenomena or exotic material properties.

This is the experimental gateway to a world which has long only been described in theory within the context of the so-called Bose-Hubbard model which seeks to account for complex processes in solids.

Quantum sensors, transistors and secure encryption: And there may be progress not only in theory, but also with regard to possible technological developments. Since the light sources always have the same underlying defect in the material, they are theoretically indistinguishable. This allows for applications which are based on the quantum-mechanical principle of entanglement.

"It is possible to integrate our quantum light sources very elegantly into photon circuits," says Klein. "Owing to the high sensitivity, for example, it is possible to build quantum sensors for smartphones and develop extremely secure encryption technologies for data transmission."




Driverless shuttle

10 July 2019: ESA’s technical heart will be serving as a testbed for this driverless shuttle in the coming months. 31)

The Agency’s ESTEC establishment in Noordwijk, the Netherlands, is working with vehicle owner Dutch Automated Mobility, provincial and municipal governments and the bus company Arriva to assess its viability as a ‘last mile’ solution for public transport.

The fully autonomous vehicle calculates its position using a fusion of satellite navigation, lidar ‘laser radar’, visible cameras and motion sensors. Once it enters service in October it will be used to transport employees from one side of the ESTEC complex to the other.

The fully-electric, zero-emission shuttle will respect the on-site speed limit of 15 km/h, and for its first six months of service will carry a steward to observe its operation along its preprogrammed 10-minute-long roundtrip.


Figure 24: This driverless shuttle will soon be tested at ESA/ESTEC in the Netherlands (image credit: ESA, B. Smith)




New Method Can Spot Failing Infrastructure from Space

09 July 2019: We rely on bridges to connect us to other places, and we trust that they're safe. While many governments invest heavily in inspection and maintenance programs, the number of bridges that are coming to the end of their design lives or that have significant structural damage can outpace the resources available to repair them. But infrastructure managers may soon have a new way to identify the structures most at risk of failure. 32)


Figure 25: A satellite view of the Morandi Bridge in Genoa, Italy, prior to its August 2018 collapse. The numbers identify key bridge components. Numbers 4 through 8 correspond to the bridge's V-shaped piers (from west to east). Numbers 9 through 11 correspond to three independent balance systems on the bridge. In the annotated version, the black arrows identify areas of change based on data from the COSMO-SkyMed satellite constellation (image credit: NASA/JPL-Caltech/Google)

Scientists, led by Pietro Milillo of NASA's Jet Propulsion Laboratory in Pasadena, California, have developed a new technique for analyzing satellite data that can reveal subtle structural changes that may indicate a bridge is deteriorating - changes so subtle that they are not visible to the naked eye.

In August 2018, the Morandi Bridge, near Genoa, Italy, collapsed, killing dozens of people. A team of scientists from NASA, the University of Bath in England and the Italian Space Agency used synthetic aperture radar (SAR) measurements from several different satellites and reference points to map relative displacement - or structural changes to the bridge - from 2003 to the time of its collapse. Using a new process, they were able to detect millimeter-size changes to the bridge over time that would not have been detected by the standard processing approaches applied to spaceborne synthetic aperture radar observations.

They found that the deck next to the bridge's collapsed pier showed subtle signs of change as early as 2015; they also noted that several parts of the bridge showed a more significant increase in structural changes between March 2017 and August 2018 - a hidden indication that at least part of the bridge may have become structurally unsound.
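The kind of change described above can be illustrated with a toy displacement time series: fit linear trends to two windows of satellite-derived displacement and flag a point whose deformation rate accelerates. This is a simplified sketch for illustration only, not the team's actual processing chain (which combines multi-geometry SAR observations with a statistical change detector); all numbers below are invented.

```python
import numpy as np

# Hypothetical SAR-derived displacement history for one bridge point (mm),
# sampled roughly monthly over three years. Values are invented.
t = np.arange(36, dtype=float)  # months
disp = np.where(t < 24, 0.05 * t, 0.05 * 24 + 0.4 * (t - 24))

def rate(times, series):
    """Least-squares linear deformation rate (mm/month)."""
    return np.polyfit(times, series, 1)[0]

# Compare the deformation rate before and after a candidate change point.
split = 24
rate_pre = rate(t[:split], disp[:split])    # slow, steady motion
rate_post = rate(t[split:], disp[split:])   # accelerated motion

# Flag the point if the rate increases markedly (threshold is arbitrary).
flagged = rate_post > 3.0 * abs(rate_pre)
print(f"pre: {rate_pre:.2f} mm/mo, post: {rate_post:.2f} mm/mo, flagged: {flagged}")
```

A real analysis would operate on many measurement points per structure and account for seasonal and atmospheric effects, but the underlying signal of interest is the same: a statistically significant change in deformation rate.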

"This is about developing a new technique that can assist in the characterization of the health of bridges and other infrastructure," Milillo said. "We couldn't have forecasted this particular collapse because standard assessment techniques available at the time couldn't detect what we can see now. But going forward, this technique, combined with techniques already in use, has the potential to do a lot of good."

The technique is limited to areas that have consistent synthetic aperture radar-equipped satellite coverage. In early 2022, NASA and the Indian Space Research Organization (ISRO) plan to launch the NASA-ISRO Synthetic Aperture Radar (NISAR), which will greatly expand that coverage. Designed to enable scientists to observe and measure global environmental changes and hazards, NISAR will collect imagery that will enable engineers and scientists to investigate the stability of structures like bridges nearly anywhere in the world about every week.

"We can't solve the entire problem of structural safety, but we can add a new tool to the standard procedures to better support maintenance considerations," said Milillo.

The majority of the SAR data for this study was acquired by the Italian Space Agency's COSMO-SkyMed constellation and the European Space Agency's (ESA's) Sentinel-1a and -1b satellites. The research team also used historical data sets from ESA's Envisat satellite. The study was recently published in the journal Remote Sensing. 33)




Atomic motion captured in 4-D for the first time

27 June 2019: Everyday transitions from one state of matter to another—such as freezing, melting or evaporation—start with a process called "nucleation," in which tiny clusters of atoms or molecules (called "nuclei") begin to coalesce. Nucleation plays a critical role in circumstances as diverse as the formation of clouds and the onset of neurodegenerative disease. 34)

A UCLA-led team has gained a never-before-seen view of nucleation—capturing how the atoms rearrange at 4-D atomic resolution (that is, in three dimensions of space and across time). The findings, published in the journal Nature, differ from predictions based on the classical theory of nucleation that has long appeared in textbooks. 35)

"This is truly a groundbreaking experiment—we not only locate and identify individual atoms with high precision, but also monitor their motion in 4-D for the first time," said senior author Jianwei "John" Miao, a UCLA professor of physics and astronomy, who is the deputy director of the STROBE National Science Foundation Science and Technology Center and a member of the California NanoSystems Institute at UCLA.

Research by the team, which includes collaborators from Lawrence Berkeley National Laboratory, University of Colorado at Boulder, University of Buffalo and the University of Nevada, Reno, builds upon a powerful imaging technique previously developed by Miao's research group. That method, called "atomic electron tomography," uses a state-of-the-art electron microscope located at Berkeley Lab's Molecular Foundry, which images a sample using electrons. The sample is rotated, and in much the same way a CAT scan generates a three-dimensional X-ray of the human body, atomic electron tomography creates stunning 3D images of atoms within a material.

Miao and his colleagues examined an iron-platinum alloy formed into nanoparticles so small that it takes more than 10,000 laid side by side to span the width of a human hair. To investigate nucleation, the scientists heated the nanoparticles to 520° Celsius (968° Fahrenheit), and took images after 9 minutes, 16 minutes and 26 minutes. At that temperature, the alloy undergoes a transition between two different solid phases.
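The size comparison above is easy to check with back-of-the-envelope arithmetic. Assuming a human hair roughly 100 µm wide (a typical value; the article itself gives no figure), more than 10,000 particles side by side implies particles of about 10 nm or less across:

```python
# Back-of-the-envelope check of the nanoparticle size comparison.
# Assumes a hair width of ~100 µm (typical value, not from the article).
hair_width_m = 100e-6
particles_across = 10_000

particle_size_m = hair_width_m / particles_across
print(f"implied particle size: {particle_size_m * 1e9:.0f} nm")  # ~10 nm
```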


Figure 26: 4D atomic motion captured in an iron-platinum nanoparticle at three different annealing times. The experimental observations are inconsistent with classical nucleation theory, showing the need for a model that goes beyond it




SUN-to-LIQUID (Fuels from concentrated sunlight)

June 2019: The EU (European Union) energy roadmap for 2050 aims at a 75% share of renewables in the gross energy consumption. Achieving this target requires a significant share of alternative transportation fuels, including a 40% target share of low carbon fuels in aviation. 36) Therefore the European Commission calls for the development of sustainable fuels from non-biomass non-fossil sources.

In contrast to biofuels, solar energy is undisputedly scalable to any future demand and is already utilized at large scale to produce heat and electricity. Solar energy may also be used to produce hydrogen, but the transportation sector cannot easily replace hydrocarbon fuels, with aviation being the most notable example. Due to the long design and service times of aircraft, the aviation sector will critically depend on the availability of liquid hydrocarbons for decades to come. 37) Heavy-duty trucks, maritime and road transportation are also expected to rely strongly on liquid hydrocarbon fuels. 38) Thus, the large-volume availability of ‘drop-in’ capable renewable fuels is of great importance for decarbonizing the transport sector.

This challenge is addressed by the four-year solar fuels project SUN-to-LIQUID, which kicked off in January 2016.

The European H2020 project aims at developing a solar thermochemical technology as a highly promising fuel path at large scale and competitive costs.

Solar radiation is concentrated by a heliostat field and efficiently absorbed in a solar reactor that thermochemically converts H2O and CO2 to syngas, which is subsequently processed into Fischer-Tropsch hydrocarbon fuels. Solar-to-syngas energy conversion efficiencies exceeding 30% can potentially be realized, 39) thanks to favorable thermodynamics at high temperature and utilization of the full solar spectrum. 40)
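The solar-to-syngas efficiency quoted above is conventionally defined as the ratio of the chemical energy stored in the product syngas to the concentrated solar power entering the reactor (this is the standard definition in the solar thermochemistry literature; the symbols are generic, not taken from the project documents):

```latex
\eta_{\text{solar-to-fuel}} =
  \frac{\dot{n}_{\mathrm{syngas}} \, \Delta H_{\mathrm{syngas}}}
       {P_{\mathrm{solar}}}
```

where the numerator is the molar syngas production rate times its heating value, and the denominator is the solar radiative power input.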

Expected Innovations

The following key innovations are expected from the SUN-to-LIQUID project:

• Advanced modular solar concentration technology for high-flux/high-temperature applications.

• Modular solar reactor technology for the thermochemical production of syngas from H2O and CO2 at field scale and with record-high solar energy conversion efficiency.

• Optimization of high-performance redox materials and reticulated porous ceramic (RPC) structures with favorable thermodynamics, rapid kinetics, stable cyclic operation, and efficient heat and mass transfer.

• Pre-commercial integration of all subsystems of the process chain to solar liquid fuels, namely: the high-flux solar concentrator, the solar thermochemical reactor, and the gas-to-liquid conversion unit.

Objectives

SUN-to-LIQUID will design, fabricate, and experimentally validate a large-scale, complete solar fuel production plant.

The preceding EU-project SOLAR-JET has recently demonstrated the first-ever solar thermochemical kerosene production from H2O and CO2 in a laboratory environment. 41) A total of 291 stable redox cycles were performed, yielding 700 standard liters of high-quality syngas, which was compressed and further processed via Fischer-Tropsch synthesis to a mixture of naphtha, gasoil, and kerosene. 42)

As a follow-up project, SUN-to-LIQUID will design, fabricate, and experimentally validate a more than 12-fold scale-up of the complete solar fuel production plant and will establish a new milestone in reactor efficiency. The field validation will integrate for the first time the whole production chain from sunlight, H2O and CO2 to liquid hydrocarbon fuels.


Figure 27: SUN-to-LIQUID will realize three subsystems (image credit: EC)

1) A high-flux solar concentrating subsystem — Consisting of a sun-tracking heliostat field, that delivers radiative power to a solar reactor positioned at the top of a small tower.

2) A 50 kW solar thermochemical reactor subsystem — For syngas production from H2O and CO2 via the ceria-based thermochemical redox cycle, with optimized heat transfer, fluid mechanics, material structure, and redox chemistry.

3) A gas-to-liquid conversion subsystem — Comprising compression and storage units for syngas and a dedicated micro FT unit for the synthesis of liquid hydrocarbon fuels.

SUN-to-LIQUID will run a long-term operation campaign, parametrically optimizing the solar thermochemical fuel plant on a daily basis over a time scale of months under realistic steady-state and transient conditions relevant to large-scale industrial implementation.


Concept and Approach

The SUN-to-LIQUID approach uses concentrated solar energy to synthesize liquid hydrocarbon fuels from H2O and CO2. This reversal of combustion is accomplished via a high-temperature thermochemical cycle based on metal oxide redox reactions which convert H2O and CO2 into energy-rich synthesis gas (syngas), a mixture of mainly H2 and CO. 43) This two-step cycle for splitting H2O and CO2 is schematically represented by:

The thermochemical process
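In its standard textbook form, written here for the non-stoichiometric ceria cycle used in SUN-to-LIQUID (this is the conventional representation from the solar thermochemistry literature, not reproduced verbatim from the project source), the two steps read:

```latex
% Step 1: endothermic reduction at high temperature
\mathrm{CeO_2 \;\longrightarrow\; CeO_{2-\delta} + \tfrac{\delta}{2}\,O_2}

% Step 2: exothermic re-oxidation with H2O and/or CO2 at lower temperature
\mathrm{CeO_{2-\delta} + \delta\,H_2O \;\longrightarrow\; CeO_2 + \delta\,H_2}
\mathrm{CeO_{2-\delta} + \delta\,CO_2 \;\longrightarrow\; CeO_2 + \delta\,CO}
```

The net effect over one full cycle is the splitting of H2O and CO2 into H2, CO and O2, with the O2 released in a separate step.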

Since H2/CO and O2 are formed in different steps, the problematic high-temperature fuel/O2 separation is eliminated. The net product is high-quality synthesis gas (syngas), which is further processed to liquid hydrocarbons via Fischer-Tropsch (FT) synthesis. FT synthetic paraffinic kerosene derived from syngas is already certified for aviation.

SUN-to-LIQUID uses concentrated solar radiation as the source of high-temperature process heat to drive endothermic chemical reactions for solar fuel production. 44) A variety of redox active materials have been explored by different research groups. 45) Among them, non-stoichiometric cerium oxide (ceria) has emerged as an attractive redox active material because of its high oxygen ion conductivity and cyclability, while maintaining its fluorite-type structure and phase.

Reactor configuration

The laboratory-scale solar reactor for a radiative power input of 4 kW has been designed, fabricated, and experimentally demonstrated at ETH Zurich. The reactor configuration, which was used in the FP7-project SOLAR-JET, is schematically shown in Figure 28.

It consists of a cavity receiver containing a reticulated porous ceramic (RPC) foam-type structure made of pure CeO2 that was directly exposed to concentrated solar radiation. The production of H2 from H2O, CO from CO2, and high quality syngas suitable for FT synthesis by simultaneously splitting a mixture of H2O and CO2 has been demonstrated (Ref. 42).

The main objective of SUN-to-LIQUID is the scale-up and experimental demonstration of the complete process chain to solar liquid fuels from H2O and CO2 at a pre-commercial size, i.e. moving from a 4 kW setup in the laboratory to a 50 kW pre-commercial plant in the field. SUN-to-LIQUID will demonstrate an enhanced solar-to-fuel energy conversion efficiency and validate the field suitability.


Figure 28: Schematic of the reactor configuration in the FP7-project SOLAR-JET (image credit: EC)


The high-flux solar concentrating subsystem consists of an ultra-modular solar heliostat central receiver that provides intense solar radiation for high-temperature applications beyond the capabilities of current commercial CSP installations. This subsystem was constructed at IMDEA Energía at Móstoles Technology Park, Madrid, in 2016. The customized heliostat field makes use of the most recent developments in small-size heliostats and a tower with reduced height (15 m) to minimize visual impact. The field consists of 169 small heliostats (1.9 m x 1.6 m). When all heliostats are aligned, it is possible to achieve the specified flux of more than 2500 kW/m2 over a 16 cm aperture, delivering at least 50 kW, with a peak flux of 3000 kW/m2. A reliable road map for competitive drop-in fuel production from H2O, CO2, and solar energy will be established.
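The quoted flux and power figures are self-consistent, as a quick calculation shows (the 16 cm aperture is taken here to be circular, an assumption not stated in the text):

```python
import math

# Heliostat field and aperture figures quoted in the text.
n_heliostats = 169
heliostat_area_m2 = 1.9 * 1.6      # per heliostat
aperture_diameter_m = 0.16         # 16 cm, assumed circular
mean_flux_kw_m2 = 2500             # specified flux through the aperture

total_mirror_area_m2 = n_heliostats * heliostat_area_m2      # ~514 m² of mirrors
aperture_area_m2 = math.pi * (aperture_diameter_m / 2) ** 2  # ~0.02 m²

# Power delivered through the aperture at the specified mean flux.
power_kw = mean_flux_kw_m2 * aperture_area_m2
print(f"mirror area: {total_mirror_area_m2:.0f} m², "
      f"power through aperture: {power_kw:.1f} kW")  # ~50 kW
```

A mean flux of 2500 kW/m2 over the 16 cm aperture indeed corresponds to just over 50 kW, matching the stated specification.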


Figure 29: The SUN-to-LIQUID project develops an alternative fuel technology that promises unlimited renewable transportation fuel supply from water, CO2 and concentrated sunlight. The project, which is funded by the EU and Switzerland, can have important implications for the transportation sectors, especially for the long-haul aviation and shipping sectors, which are strongly dependent on hydrocarbon fuels (video credit: ARTTIC, Published on 12 June 2019)


SUN-to-LIQUID Field Test Project

The SUN-to-LIQUID four-year project, which finishes at the end of this year, is supported by the EU’s Horizon 2020 research and innovation program and the Swiss State Secretariat for Education, Research and Innovation. It involves leading European research organizations and companies in the field of solar thermochemical fuel research. In addition to ETH Zurich, IMDEA Energy and HyGear Technology & Services, other partners include the German Aerospace Center (DLR) and Abengoa Energía. Project coordinator Bauhaus Luftfahrt is also responsible for technology and system analyses and ARTTIC International Management Services is supporting the consortium with project management and communication. 46)

The preceding EU-project SOLAR-JET developed the technology and achieved the first-ever production of solar jet fuel in a laboratory environment. The SUN-to-LIQUID project scaled up this technology for on-sun testing at a solar tower. For that purpose, a unique solar concentrating plant was built at the IMDEA Energy Institute in Móstoles, Spain. “A sun-tracking field of heliostats concentrates sunlight by a factor of 2500 – three times greater than current solar tower plants used for electricity generation,” explains Manuel Romero of IMDEA Energy. This intense solar flux, verified by the flux measurement system developed by the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) makes it possible to reach reaction temperatures of more than 1500 ºC within the solar reactor positioned at the top of the tower. 47)


Figure 30: The sun-tracking heliostat field delivers radiative power to a solar reactor positioned at the top of the tower (image credit: Christophe Ramage ©ARTTIC 2019)

The solar reactor, developed by project partner ETH Zurich, produces synthesis gas, a mixture of hydrogen and carbon monoxide, from water and carbon dioxide via a thermochemical redox cycle. An on-site gas-to-liquid plant that was developed by the project partner HyGear processes this gas to kerosene.

DLR has many years of experience in the development of solar-thermal chemical processes and their components. In the SUN-to-LIQUID project, DLR was responsible for measuring the solar field and concentrated solar radiation, for developing concepts for optimized heat recovery and – as in the previous SOLAR-JET project – for computer simulations of the reactor and the entire plant. Researchers from the DLR Institute of Solar Research and the DLR Institute of Combustion Technology used virtual models to scale up the solar production of kerosene from the laboratory to a megawatt-scale plant and to optimize the design and operation of the plant. For SUN-to-LIQUID, DLR solar researchers developed a flux density measurement system that makes it possible to measure the intensity of highly concentrated solar radiation directly in front of the reactor with minimal interruption of its operation. This data is necessary to operate the plant safely and to determine the efficiency of the reactor.

Unlimited supply of sustainable fuel: Compared to conventional fossil-derived jet fuel, net carbon dioxide emissions to the atmosphere can be reduced by more than 90 percent. Furthermore, since the solar energy-driven process relies on abundant feedstock and does not compete with food production, it can meet future fuel demand at a global scale without the need to replace the existing worldwide infrastructure for fuel distribution, storage, and utilization.