Everything posted by Lurker

  1. Great news, but I hope the future is brighter now that they've decided to make it free, since some software has gone downhill after going free. BTW, the news page is here: https://www.clarku.edu/centers/geospatial-analytics/2024/08/27/announcement-terrset-liberagis/
  2. The Role of a GIS Portfolio: More Than Just a Resume

A resume provides a snapshot of your education, skills, and experience, but a GIS portfolio offers a deeper dive into what you can actually do. It's the difference between telling and showing. While a resume might list "proficiency in ArcGIS" as a skill, a portfolio can demonstrate this proficiency through detailed examples of projects you've completed, maps you've created, and problems you've solved using GIS technology.

Your GIS portfolio should include a variety of work samples that highlight your capabilities across different areas of GIS. This might include:

Maps and Visualizations: High-quality maps that demonstrate your ability to analyze spatial data and present it in a clear, compelling manner.
Project Descriptions: Detailed write-ups of the projects you've worked on, including the challenges you faced, the solutions you implemented, and the impact of your work.
Data Analysis: Examples of your ability to analyze and interpret spatial data using ArcGIS or other GIS software.
Programming and Automation: If applicable, scripts or code snippets that show your ability to automate GIS tasks or perform advanced spatial analysis.

By including these elements, your portfolio becomes a powerful tool that not only highlights your technical skills but also tells the story of your professional journey in GIS.

Building Your Portfolio: A Step-by-Step Guide

Creating a GIS portfolio might seem daunting, especially if you're early in your career and don't yet have a wealth of experience to draw from. However, with a strategic approach, you can build a portfolio that effectively showcases your potential.

1) Start with What You Have. Don't wait until you've accumulated years of experience before you start building your portfolio. Start with the projects you completed during your education or any internships you've done. Even classroom assignments can be valuable portfolio pieces if they demonstrate your skills and your ability to solve real-world problems.

2) Choose a Platform. Your GIS portfolio needs a home, and there are several platforms you can use to create it. Websites like GitHub, Behance, or even a personal website can serve as a platform for your portfolio. Esri's ArcGIS StoryMaps, ArcGIS Experience Builder, and ArcGIS Hub are excellent tools that allow you to create interactive, visually compelling narratives that showcase your work.

3) Showcase a Variety of Skills. When selecting projects for your portfolio, aim for diversity. Include projects that demonstrate your proficiency with different GIS tools and techniques, from spatial analysis and geocoding to data visualization and programming. This shows potential employers both the breadth of your skills and your adaptability across different areas of GIS.

4) Provide Context. A map or a data visualization on its own might look impressive, but without context, it's just a pretty picture. For each project in your portfolio, provide a brief description that explains the problem you were trying to solve, the methods you used, and the results you achieved. This context is crucial for helping potential employers understand the impact of your work.

5) Keep It Updated. Your portfolio should be a living document that evolves as your career progresses. Make it a habit to update your portfolio regularly with new projects and skills. This keeps your portfolio fresh and serves as a reminder of your growth and accomplishments in the field.
Leveraging Your Portfolio: How to Use It Effectively

Once you've built your GIS portfolio, the next step is to leverage it in your job search and career development. Here are some strategies for making the most of your portfolio:

1) Use It in Job Applications. When applying for GIS positions, include a link to your portfolio in your resume and cover letter. This allows potential employers to see firsthand what you can do, rather than just reading about it.

2) Bring It to Interviews. In an interview, your portfolio can be a powerful tool for demonstrating your skills and experience. Consider bringing a tablet or laptop to the interview so you can walk the interviewer through your portfolio and discuss the projects in detail.

3) Share It on Professional Networks. Platforms like LinkedIn are great for sharing your portfolio with a wider audience. Post updates about new projects you've added to your portfolio and include a link to it in your LinkedIn profile. This increases your visibility and can attract potential employers or collaborators.

4) Use It for Networking. When networking at conferences or industry events, your portfolio can serve as a conversation starter. Whether you're talking to potential employers or peers in the industry, being able to show them your work can leave a lasting impression.

In the competitive and ever-evolving field of GIS, having a well-crafted portfolio is not just an option; it's a necessity. A strong GIS portfolio serves as a powerful tool for showcasing your skills, telling your professional story, and navigating your career path. Whether you're just starting out or looking to make a career transition, your portfolio can help you stand out, demonstrate your value, and open doors to new opportunities in the geospatial industry.
  3. The summer holidays are ending, which for many concludes with a long drive home and reliance on GPS devices to get safely home. Every now and then, though, GPS devices can suggest strange directions or get briefly confused about your location. And until now, no one knew for sure when the satellites were in a good enough position for the GPS system to give reliable directions.

TU/e's Mireille Boutin and her co-worker Gregor Kemper at the Technical University of Munich have turned to mathematics to help determine when your GPS system has enough information to determine your location accurately. The research is published in the journal Advances in Applied Mathematics.

"In 200 meters, turn right." This is a typical instruction that many have heard from their global positioning system (GPS). Without a doubt, advancements in GPS technologies and mobile navigation apps have helped GPS play a major role in modern car journeys. But strictly adhering to instructions from GPS devices can lead to undesirable situations. Less serious might be turning left instead of right, while more serious could be driving your car into a harbor, as two tourists did in Hawaii in 2023. The latter incident is very much an exception to the rule, and one might wonder: "How often does this happen and why?"

GPS and your visibility

"The core of the GPS system was developed in the mid-1960s. At the time, the theory behind it did not provide any guarantee that the location given would be correct," says Boutin, professor at the Department of Mathematics and Computer Science. It won't come as a surprise, then, to learn that calculating an object's position on Earth relies on some nifty mathematics. That mathematics hasn't changed much since the early days and remains at the core of the GPS system we all use. In Boutin's view, it deserved an update.

How does GPS work?

Before revealing Boutin and Kemper's big finding, just how does GPS work? Global positioning is all about determining the position of a device on Earth using signals sent by satellites. A signal sent by a satellite carries two key pieces of information: the position of the satellite in space and the time at which the position was sent, recorded by a very precise clock on board the satellite, usually an atomic clock. Thanks to the atomic clock, satellites send very accurate times, but the big issue lies with the accuracy of the clock in the user's device, whether it's a GPS navigation device, a smartphone, or a running watch. "In effect, GPS combines precise and imprecise information to figure out where a device is located," says Boutin. "GPS might be widely used, but we could not find any theoretical basis to guarantee that the position obtained from the satellite signals is unique and accurate."

Google says 'four'

If you do a quick Google search for the minimum number of satellites needed for navigation with GPS, multiple sources report that you need at least four satellites. But the question is not just how many satellites you can see; it is also what arrangements they can form. For some arrangements, determining the user position is impossible. But which arrangements exactly? That's what the researchers wanted to find out.
"We found conjectures in scientific papers that seem to be widely accepted, but we could not find any rigorous argument to support them anywhere. Therefore, we thought that, as mathematicians, we might be able to fill that knowledge gap," Boutin says. To solve the problem, Boutin and Kemper simplified the GPS problem to what works best in practice: equations that are linear in terms of the unknown variables. "A set of linear equations is the simplest form of equations we could hope for. To be honest, we were surprised that this simple set of linear equations for the GPS problem wasn't already known," Boutin adds. The problem of uniqueness With their linear equations ready, Boutin and Kemper then looked closely at the solutions to the equations, paying special attention as to whether the equations gave a unique solution. "A unique solution implies that the only solution to the equations is the actual position of the user," notes Boutin. If there is more than one solution to the equations, then only one is correct—that is, the true user position—but the GPS system would not know which one to pick and might return the wrong one. The researchers found that nonunique solutions can emerge when the satellites lie in a special structure known as a "hyperboloid of revolution of two sheets." "It doesn't matter how many satellites send a signal—if they all lie on one of these hyperboloids then it's possible that the equations can have two solutions, so the one chosen by the GPS could be wrong," says Boutin. But what about the claim that you need at least four satellites to determine your position? "Having four satellites can work, but the solution is not always unique," points out Boutin. Why mathematics matters For Boutin, this work demonstrates the power and application of mathematics. "I personally love the fact that mathematics is a very powerful tool with lots of practical applications," says Boutin. "I think people who are not mathematicians may not see the connections so easily, and so it is always nice to find clear and compelling examples of everyday problems where mathematics can make a difference." Central to Boutin and Kemper's research is the field of algebraic geometry in which abstract algebraic methods are used to solve geometrical, real-world problems. "Algebraic geometry is an area of mathematics that is considered very abstract. I find it nice to be reminded that any piece of mathematics, however abstract it might be, may turn out to have practical applications at some point," says Boutin.
  4. At least 30 people have been killed following the collapse of a dam in Sudan's northwest Red Sea State, according to the United Nations' emergency relief agency. Hundreds more are believed missing, Reuters reported.

Flash flooding decimated 20 villages and damaged a further 50 after the Arba'at Dam collapsed Sunday, the United Nations Office for the Coordination of Humanitarian Affairs (OCHA) said. It estimated 50,000 people had been "severely affected" by the disaster. In the villages of Khor-Baraka and Tukar, residents were reportedly forced to flee for safety, OCHA said, citing local officials. It added that the final death toll could rise significantly.

Agence France-Presse (AFP) footage of the aftermath shows industrial trucks buried in mud and debris, some laden with crates and personal belongings. Other vehicles are almost unrecognizable on the silty riverbank. One resident who lived near the dam, Moussa Mohamad Moussa, described in another video from AFP how "the dam broke and… the water swept away around 40 people." "In the area where I'm from, the Tabub area… they told me that all the houses and everything was swept away," he said.

Local media report that the dam burst on Saturday night following heavy rains, but exact details have been difficult to gather due to mobile network outages. Arba'at, 40 km (25 miles) north of Port Sudan, is part of Sudan's system of dams that help manage floodwaters. The country has been dealing with heavy rainfall and floods since the end of June, with OCHA saying that the harsh weather has affected an estimated 317,000 people (56,450 families) across 16 states. The ministry of health said on Monday that the death toll from flooding across the country had risen to 132. The most affected states include North Darfur, River Nile, and West Darfur, OCHA reported.
  5. Bayanat, a leading provider of AI-powered geospatial solutions and a subsidiary of G42, confirmed that the launch of the Synthetic Aperture Radar (SAR) satellite "Foresight-1" is a significant achievement that reinforces the UAE's global leadership in the space sector, as it is the first satellite of the UAE's Earth Observation Space Programme.

Hasan Al Hosani, Managing Director of Bayanat, told the Emirates News Agency (WAM) that Foresight-1 places the UAE among the prestigious list of 20 countries around the world that operate SAR space assets, which strengthens its position in the space sector and supports its growing capabilities in this field. He pointed out that the strategic roadmap drawn up by Bayanat and Al Yah Satellite Communications Company (Yahsat) is based on deploying a constellation of satellites with SAR technology in the near future. He explained that since the initial announcement of the Earth Observation Space Programme in 2023, the two companies have been implementing the strategic plan for the Earth Observation System, starting with the Foresight-1 satellite. He added, "After the successful launch of the Foresight-1 satellite, we are now able to operate space assets prepared to cross over the Middle East region repeatedly and in record time."

He stated that what distinguishes Foresight-1 is that it provides continuous, high-resolution monitoring solutions using SAR technology, an active sensing system that illuminates the Earth's surface and measures the reflected signal to produce high-resolution images. Unlike traditional optical imaging satellites, SAR satellites can capture images day or night, regardless of weather conditions or the reflection of sunlight. He said that this technology will enhance the quality of geospatial solutions and services provided by Bayanat and Yahsat, in addition to enhancing capabilities in disaster management, marine monitoring, and smart mobility applications. He pointed out that Emirati citizens make up more than 30 per cent of the Earth Observation Space Programme's workforce, reflecting the commitment to developing highly qualified national cadres in one of the most vital sectors.

The merger between Bayanat and Yahsat is expected to be completed before the end of this year, subject to final approvals from regulatory authorities in the UAE and internationally. He pointed out that the merger contributes to establishing "Space42" as a leading Emirati company in the space sector with a global footprint, supporting the country's efforts to achieve the directions of the National Space Strategy 2030. Al Hosani stressed that Space42 will continue on the same path to achieve the goals of the National Space Strategy 2030 and support the country's efforts to develop its space capabilities, support national security, enhance innovation, encourage international cooperation, drive economic growth, and enhance urban development through space technology.
  6. In preparation for liftoff on 4 September 2024 (3 September Kourou time), the Vega–Sentinel-2C upper composite has been hoisted into the launch tower at Europe's Spaceport.

The Sentinel-2 mission is based on a constellation of two identical satellites, Sentinel-2A (launched in 2015) and Sentinel-2B (launched in 2017), flying in the same orbit but 180° apart to optimise coverage and revisit time. Each satellite carries a high-resolution multispectral imager to deliver optical images from the visible to the shortwave-infrared region of the electromagnetic spectrum. From an altitude of 786 km, the satellites provide images in 13 spectral bands, with resolutions of 10, 20 and 60 m, over a large swath width of 290 km. Data collected from Sentinel-2 are used for a wide range of applications, including precision farming, water quality monitoring, natural disaster management and methane emission detection.

Sentinel-2C launches on Vega, Europe's nimble rocket specialising in lofting small scientific and Earth observation spacecraft to orbits such as Sun-synchronous polar orbit, in which a satellite keeps a constant orientation relative to the Sun. At 30 m tall, Vega weighs 137 tonnes on the launch pad and reaches orbit with three solid-propellant stages before the fourth, liquid-propellant stage takes over for precise placement of Sentinel-2C into its orbit. By rocket standards Vega is lightweight and powerful: the first three stages burn through their fuel and bring Vega and its satellite to space in just seven minutes.

Once in orbit, Sentinel-2C will replace its predecessor, Sentinel-2A, while Sentinel-2D will later replace Sentinel-2B.

ESA - Sentinel-2C in the Vega launch tower
  7. Images gathered by the UK military's first satellite will be shared with allies, the Ministry of Defence (MoD) has said. The department said the war in Ukraine had shown that the use of space is "crucial" to military operations.

The satellite, named Tyche, was launched on Friday from a rocket owned by SpaceX, the company co-founded by technology entrepreneur and billionaire Elon Musk. Along with military information, it is intended that data from the satellite will be accessible to other UK Government departments for uses including environmental disaster monitoring, mapping information development and tracking the impact of climate change globally, according to the MoD.

Tyche, which is comparable in size to a washing machine, was designed and built in the UK through a £22 million contract awarded to Surrey Satellite Technology Limited (SSTL) and is the first satellite to be fully owned by the MoD. SSTL received the first signals from Tyche a few hours after lift-off, confirming the successful launch from Vandenberg Space Force Base, California, on a SpaceX Falcon 9 rocket as part of the Transporter 11 mission. Over a five-year life span, the 150 kg satellite will provide imagery to support the UK armed forces and is the first to be launched by the MoD out of a planned constellation of satellites under its space-based Intelligence, Surveillance and Reconnaissance (ISR) programme.

Maria Eagle, minister for defence procurement and industry, said: "Tyche will provide essential intelligence for military operations as well as supporting wider tasks across government. Tyche also shows the UK's commitment to support innovation in science and technology, stimulating growth across the sector and supporting highly-skilled jobs in the UK." The MoD said the design and build of Tyche had supported about 100 high-skilled roles at SSTL since 2022.

UK Space Commander Major General Paul Tedman said: "This is a fabulous day for UK space. The successful launch of Tyche has shown that UK Space Command, and its essential partners across defence and industry, can rapidly take a concept through to the delivery of a satellite capability on orbit. Tyche represents the first of a future constellation of Intelligence, Surveillance and Reconnaissance satellites that we'll launch over the coming years. I'd like to take this opportunity to congratulate everybody involved with Tyche and thank them for their support."

Defence Equipment and Support space team leader Paul Russell described the project as an "exciting journey". He said: "To see Tyche – the first of a new generation of UK military capabilities – delivered into orbit is an incredibly proud moment and a tribute to everyone's commitment to this key project."
  8. Everything from drones to airplanes, ships, and cars is equipped with GPS units to help navigate around the world. This information is crucial not only for powering autonomous navigation systems, but also for supplying human operators with the information they need to get where they are going. But while this technology has become essential in the modern world, our reliance on it is somewhat concerning. In some locations GPS signals are blocked by obstructions, rendering the systems that rely on them useless. Worse yet, GPS signals can be intentionally spoofed or jammed, which could lead to widespread chaos and tragedy.

These problems could be averted by using self-contained motion sensors rather than signals from a constellation of satellites. But that would require motion sensors thousands of times more accurate than the ones in our smartphones and other consumer electronics. The technology does exist today: to be accurate enough to replace GPS, a quantum inertial measurement unit is currently the only option available. But these systems require six atom interferometers, each large enough to fill a small room. That is not exactly practical for the vast majority of applications, and as you might expect, these systems are also extremely expensive.

Researchers at Sandia National Laboratories have been working on a much more compact atom interferometer, however, which could make precise, GPS-free navigation a practical reality in the near future. The new system is based on photonic integrated circuits (PICs), which make it significantly more compact than the traditional laser systems used in atom interferometers. The new technology is also more resistant to vibrations and shocks, making it ideal for use in challenging environments.

PICs are small, durable chips that can perform the same functions as larger, more complex laser systems. These chips integrate various components, like modulators and amplifiers, onto a single platform, making the entire system smaller, more robust, and easier to produce. One key innovation is the development of a silicon photonic modulator, which is crucial for controlling the light in these systems. This modulator allows the system to generate and manage multiple laser frequencies from a single source, eliminating the need for multiple lasers. These novel modulators were also noted to substantially reduce unwanted spurious frequencies, called sidebands, that plague existing technologies. The result is a compact, high-performance laser system that can be used in a variety of advanced applications, including quantum sensors like atomic clocks and gyroscopes. Overall, this represents a significant step forward in making these advanced sensing technologies more practical and deployable in real-world situations.

The team also pointed out that the applications of their technology extend well beyond navigation. These sensors could, for example, be used to locate natural resources hidden beneath the ground by observing how they alter Earth's gravitational field. Further potential applications exist in enhancing lidar sensors, quantum computing, and optical communications.
  9. When two Finnair planes flying into Estonia recently had to divert in quick succession and return to Helsinki, the cause wasn't a mechanical failure or inclement weather—it was GPS denial. GPS denial is the deliberate interference of the navigation signals used and relied on by commercial aircraft. It's not a new phenomenon: the International Air Transport Association (IATA) has long provided maps of regions where GPS was routinely unavailable or untrusted. However, concern is growing rapidly as conflict spreads across Europe, the Middle East, and Asia, and GPS jamming and spoofing become weapons of economic and strategic influence. Several adversarial nations have been known to use false (spoofed) GPS signals to interfere with air transit, shipping, and trade, or to disrupt military logistics in conflict zones. And recent discussions of anti-satellite weapons have renewed fears of deliberate actions designed to wreak economic havoc by knocking out GPS.

GPS has become so ubiquitous in our daily lives that we hardly think about what happens when it's not available. A GPS outage would result in many online services becoming unavailable (these rely on GPS-based network synchronization), failure of in-vehicle satnav, and no location-based services on your mobile phone. Analyses in the U.S. and U.K. have both put the economic cost of a temporary outage at approximately $1 billion per day—but the strategic impacts can be even more significant, especially in a conflict. The saying is that infantry wins battles, but logistics wins wars. It's almost unimaginable to operate military logistics supply chains without GPS, given the heavy reliance on synchronized communications networks, general command and control, and vehicle and materiel positioning and tracking. All of these centrally rely on GPS, and all are vulnerable to disruption.

Most large military and commercial ships and aircraft carry special GPS backups for navigation, because there was, in fact, a time before GPS, and GPS is not available in all settings—underground, underwater, or at high latitudes. The GPS alternatives rely on signals that can be measured locally (for instance, motion, or magnetic fields as used in a compass), so a vessel can navigate even when GPS is unavailable or untrusted. For example, inertial navigation uses special accelerometers that measure vehicle motion, much like the ones that help your mobile phone reorient when you rotate it. Measuring how the vehicle is moving and applying Newton's laws allows you to calculate your likely position after some time. Other "alt-PNT" approaches leverage measurements of magnetic and gravitational fields to navigate against a known map of these variations near the Earth's surface. Plus, ultrastable locally deployed clocks can ensure communications networks remain synchronized during GPS outages (comms networks typically rely on GPS timing signals to remain synchronized).

Nonetheless, we rely on GPS because it's simply much better than the backups. Focusing specifically on positioning and navigation, achieving good performance with conventional alternatives typically requires you to significantly increase system complexity, size, and cost, limiting deployment options on smaller vehicles. Those alternative approaches to navigation are also unfortunately prone to errors due to the instability of the measurement equipment in use—signals just gradually drift over time, with varying environmental conditions, or with system age.
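To make that drift concrete, here is a minimal sketch of inertial dead reckoning: double-integrating accelerometer readings per Newton's laws, where even a tiny constant sensor bias grows quadratically into a large position error. The bias and sample rate are illustrative, not figures for any real sensor.

```python
# Sketch: why inertial navigation drifts. Double-integrating acceleration
# gives position, but a constant accelerometer bias grows quadratically
# with time. All numbers below are illustrative.
import numpy as np

dt = 0.01                      # 100 Hz IMU samples
t = np.arange(0, 3600, dt)     # one hour of samples
true_accel = np.zeros_like(t)  # vehicle actually at rest
bias = 1e-4                    # tiny 0.1 mm/s^2 sensor bias

measured = true_accel + bias
velocity = np.cumsum(measured) * dt   # first integration: velocity
position = np.cumsum(velocity) * dt   # second integration: position

# Analytically, the error is 0.5 * bias * t^2: about 648 m after an hour.
print(f"position error after 1 h: {position[-1]:.0f} m")
```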
We keep today's alternatives in use to provide a backstop for critical military and commercial applications, but the search is on for something significantly better than what's currently available. That something looks to be quantum-assured navigation, powered by quantum sensors. Quantum sensors rely on the laws of nature to access signatures that were previously out of reach, delivering both extreme sensitivity and stability. As a result, quantum-assured navigation can deliver defense against GPS outages and enable transformational new missions.

The most advanced quantum-assured navigation systems combine multiple sensors, each picking up unique environmental signals relevant to navigation, much the way autonomous vehicles combine lidar, cameras, ultrasonic detectors, and more to deliver the best performance. This starts with a new generation of improved quantum inertial navigation, but quantum sensing allows us to go further by accessing new signals that were previously largely inaccessible in real-world settings. While it may be surprising, Earth's gravity and magnetic fields are not constant everywhere on the planet's surface. We have maps of tiny variations in these quantities that have long been used for minerals prospecting and even underground water monitoring. We can now repurpose these maps for navigation. We're building a new generation of quantum gravimeters, magnetometers, and accelerometers, powered by the quantum properties of atoms, to be sensitive and compact enough to measure these signals on real vehicles.

The biggest improvements come from enhanced stability. Atoms and subatomic particles don't change, age, or degrade—their behavior is always the same. That's something we are now primed to exploit. Using a quantum-assured navigation system, a vehicle may be able to position itself precisely even when GPS is not available for very long periods. Not simply hours or days, as is achievable with the best military systems today, but weeks or months.

In quantum sensing, we have already achieved quantum advantage: the point at which a quantum solution decidedly beats its conventional counterparts. The task at hand is now to take these systems out of the lab and into the field in order to deliver true strategic advantage. That's no mean feat. Real platforms are subject to interference, harsh conditions, and vibrations that conspire to erase the benefits we know quantum sensors can provide. In recent cutting-edge research, new AI-powered software has been shown to deliver the robustness needed to put quantum sensors onto real moving platforms. The right software can keep the systems functional even when they're being shaken and subjected to interference on ships and aircraft.

To prevent a repeat of the Finnair event, real quantum navigation systems are now starting to undergo field testing. Our peers at Vector Atomic recently ran maritime trials of a new quantum optical clock. The University of Birmingham published measurements with a portable gravity gradiometer in the field. At Q-CTRL, we recently announced the world's first maritime trial of a mobile quantum dual gravimeter for gravity map matching at a conference in London. My team is excited to now work with Airbus, which is investigating software-ruggedized quantum sensors to provide the next generation of GPS backup on commercial aircraft. Our full quantum navigation solutions are about to commence flight safety testing, with the first flights later in the year, following multiple maritime and terrestrial trials.
With a new generation of quantum sensors in the field, we’ll be able to ensure the economy keeps functioning even in the event of a GPS outage. From autonomous vehicles to major shipping companies and commercial aviation, quantum-assured navigation is the essential ingredient in providing resilience for our entire technology-driven economy.
  10. A Falcon 9 successfully launched an Earth science mission for Europe and Japan May 28 as part of the European Space Agency's ongoing, if temporary, reliance on SpaceX for space access.

The Falcon 9 lifted off from Vandenberg Space Force Base in California at 6:20 p.m. Eastern. The payload, the Earth Cloud Aerosol and Radiation Explorer (EarthCARE) spacecraft, separated from the upper stage about 10 minutes after liftoff. Simonetta Cheli, director of Earth observation programs at ESA, said in a post-launch interview that controllers were in contact with the spacecraft. "It is all nominal and on track." Spacecraft controllers will spend the weeks and months ahead checking out the spacecraft's instruments and calibrating them, she said. That will allow the first release of science data from EarthCARE around the end of this year or early next year.

EarthCARE is an 800-million-euro ($870 million) ESA-led mission to study clouds and aerosols in the atmosphere. The spacecraft carries four instruments, including a cloud profiling radar provided by the Japanese space agency JAXA at a cost of 8.3 billion yen ($53 million). JAXA dubbed the spacecraft Hakuryu, or "White Dragon," because of the spacecraft's appearance. The 2,200-kilogram spacecraft, flying in a sun-synchronous orbit at an altitude of 393 kilometers, will collect data on clouds and aerosols in the atmosphere, along with imagery and measurements of reflected sunlight and radiated heat. That information will be used for atmospheric science, including climate and weather models.

"EarthCARE is there to study the effect of clouds and aerosols on the thermal balance of the Earth," said Dirk Bernaerts, ESA's EarthCARE project manager, at a pre-launch briefing May 21. "It's very important to observe them all together at the same location at the same time. That is what is unique about this spacecraft." Other spacecraft make similar measurements, including NASA's Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) spacecraft launched in February. "The observation techniques are different," he said. "We observe the same thing but observe slightly different aspects of the clouds and aerosols." He added that EarthCARE would use PACE data to help with calibration and validation of its observations.

Development of EarthCARE took about two decades and resulted in cost growth that Cheli estimated at the pre-launch briefing to be 30%. Maximilian Sauer, EarthCARE project manager at prime contractor Airbus, said several factors contributed to the delays and overruns, including technical issues with the instruments as well as effects of the pandemic. One lesson learned from EarthCARE, Cheli said in the post-launch interview, was the need for "strict management" of the project, which she said suffered from the challenges of coordinating work between agencies and companies. The mission also underscored the importance of strong support from member states as it worked to overcome problems, she added.

Another factor in EarthCARE's delay was a change in launch vehicles. EarthCARE was originally slated to go on a Soyuz rocket, but ESA lost access to that vehicle after Russia's invasion of Ukraine. The mission was first moved to Europe's Vega C, but ESA decided last June to launch it instead on a Falcon 9, citing delays in returning that rocket to flight as well as modifications to the rocket's payload fairing that would have been needed to accommodate EarthCARE. Technically, the shift in launch vehicles was not a major problem for the mission.
"Throughout the changes in the launchers we did not have to change the design of the spacecraft," said Bernaerts. He said that, during environmental tests, engineers put the spacecraft through conditions simulating different launch vehicles to prepare for the potential of changing vehicles. "From the moment we knew that Soyuz was not available, we have been looking at how stringently we could test the spacecraft to envelope other candidate launchers. That's what we did and that worked out in the end."

EarthCARE is the second ESA-led mission to launch on a Falcon 9, after the Euclid space telescope last July. Another Falcon 9 will launch ESA's Hera asteroid mission this fall. "We had a good experience with Euclid last year," said Josef Aschbacher, ESA director general, in a post-launch interview. "Our teams and the SpaceX teams are working together very well." The use of the Falcon 9 is a stopgap until Ariane 6 enters service, with a first launch now scheduled for the first half of July, and Vega C returns to flight at the end of the year. "I hear lots of questions about why we're launching with Falcon and not with Ariane, and it's really good to see the Ariane 6 inaugural flight coming closer," he said.

Those involved with the mission were simply happy to finally get the spacecraft into orbit. "There is a feeling of relief and happiness," Cheli said after the launch. "This is an emotional roller coaster," said Thorsten Fehr, EarthCARE mission scientist at ESA, on the agency webcast of the launch shortly after payload separation. "This is one of the greatest moments in my professional life ever."
  11. Maker Ilia Ovsiannikov is working on a friendly do-it-yourself robot kit, and as part of that work has released a library that makes it easier to use a range of lidar sensors in your Arduino sketches.

"I have combined support for various spinning lidar/LDS sensors into an Arduino LDS library with a single platform API [Application Programming Interface]," Ovsiannikov explains. "You can install this library from the Arduino Library Manager GUI. Why support many lidar/LDS sensors? The reason is to make the hardware — supported by [the] Kaia.ai platform — affordable to as many prospective users as possible. Some of the sensors [supported] are sold as used robot vacuum cleaner spare parts and cost as low as $16 or so (including shipping)."

The library delivers support for a broad range of lidar sensors from a unified API, Ovsiannikov explains, meaning it's not only possible to get started quickly but also to switch sensors mid-project, should existing sensors become unavailable or pricing shift to favor a different model. It also adds a few neat features of its own, including pulse-width modulation (PWM) control of lidar motors that lack their own control system, using optional adapter boards.

While the library is usable standalone, and can even perform real-time angle and distance computation directly on an Arduino microcontroller, Ovsiannikov has also published a companion package to tie it in to the Robot Operating System 2 (ROS2). "[The] Kaia.ai robot firmware forwards LDS raw data — obtained from the Arduino LDS library — to a PC running ROS2 and micro-ROS," he explains. "The ROS2 PC kaiaai_telemetry package receives the raw LDS data, decodes that data and publishes it to the ROS2 /scan topic."

More information on the library is available in Ovsiannikov's blog post, while the library itself is available on GitHub under the permissive Apache 2.0 license.

More information:
https://kaia.ai/blog/arduino-lidar-library/
https://github.com/kaiaai/LDS
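As a rough sketch of the consuming side, here is a minimal ROS2 Python node subscribing to that /scan topic. This is generic rclpy boilerplate built on the standard sensor_msgs/LaserScan message, not code from the kaiaai packages; the node name and the printed summary are illustrative.

```python
# Sketch: consuming the /scan topic that kaiaai_telemetry publishes, using
# the standard ROS2 Python client (rclpy). Generic ROS2 boilerplate, not
# code from the kaiaai packages themselves.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan


class ScanPrinter(Node):
    def __init__(self):
        super().__init__('scan_printer')
        # Subscribe to the standard laser scan topic with queue depth 10.
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)

    def on_scan(self, msg: LaserScan):
        # ranges[] holds one distance per angular step of the spinning lidar;
        # readings outside [range_min, range_max] are invalid and dropped.
        valid = [r for r in msg.ranges if msg.range_min <= r <= msg.range_max]
        if valid:
            self.get_logger().info(
                f'{len(valid)} returns, nearest obstacle {min(valid):.2f} m')


def main():
    rclpy.init()
    rclpy.spin(ScanPrinter())


if __name__ == '__main__':
    main()
```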
  12. This has been discussed in their forum: https://www.agisoft.com/forum/index.php?topic=2420.0
  13. Traditionally, surveying relied heavily on manual measurements and ground-based techniques, which were time-consuming, labour-intensive and often limited in scope. With the relatively recent emergence of photogrammetry, surveying professionals gained access to powerful tools that streamline processes, improve accuracy and unlock new possibilities in data collection and analysis.

How videogrammetry works

Videogrammetry is effectively an extension of photogrammetry; the mechanics are grounded in similar principles. Photogrammetry involves deriving accurate three-dimensional information from two-dimensional images by analysing their geometric properties and spatial relationships. The core of videogrammetry is to separate the video footage into images with sufficient overlap and image quality. The subsequent workflow is almost the same as in photogrammetry. The quality of the output depends heavily on image resolution, frames per second (FPS) and stabilization.

Integration with hardware and software

Central to videogrammetry is the camera system used to capture video footage, along with any supporting GPS devices. Ground-based videogrammetry may utilize smartphone cameras or handheld digital cameras with video capabilities. While many smartphones and digital cameras have built-in GPS capabilities that geotag photographs with location data, their accuracy is at best around 2.5 m, which is not always sufficient for professional surveying. For this reason, when using a smartphone camera, many surveyors opt for an external real-time kinematic (RTK) antenna with 2 cm accuracy. Attaching it to their smartphones enables them to receive correction data from a local NTRIP provider. This is a far more convenient option than placing and measuring ground control points (GCPs) on the terrain in combination with a GNSS device. Some of the well-known smartphone RTK products on the market today include REDcatch's SmartphoneRTK, ArduSimple's ZED-F9P RTK receiver, and Pix4D's viDoc RTK.

It is important to note that while most smartphones can geotag photographs, not all can geotag video footage, and those that can typically geotag only the first frame of the video. For this reason, users may require additional apps (e.g. PIX4Dcatch: 3D scanner) that can embed location data into the video's metadata so that it can be used for surveying and mapping purposes. While non-geotagged videos can still be used for 3D model creation in various software solutions, an RTK receiver is recommended as a minimum prerequisite for professional applications. At 3Dsurvey, a Google Pixel 7a smartphone paired with REDcatch's external RTK is the preferred option. Later this year, 3Dsurvey is set to release a ScanApp it has developed to embed RTK correction data into video file metadata, enabling automatic georeferencing for videogrammetry projects.

Examples of videogrammetry project approaches

Non-geotagged (using any smartphone or camera): This can be ideal for the 3D documentation of cultural heritage projects. However, this approach lacks the spatial accuracy necessary for tasks such as outdoor mapping or infrastructure monitoring, which demand precise georeferencing.

Accurately geotagged using external RTK (using a smartphone): Accurate geotagging using a smartphone equipped with an external RTK GNSS receiver ensures that the resulting 3D models maintain high spatial fidelity.
Therefore, this approach is suitable for applications such as land surveying and small-scale construction monitoring projects where precise positioning is crucial. Examples include mapping manholes, pipes, low-level material piles, dig sites and cables for telecommunications or electricity.

Within a larger photogrammetry/lidar project: For situations demanding the highest level of accuracy, videogrammetry can fill a gap or add another perspective to the aerial dataset obtained using other technologies, such as ground-level videogrammetry in combination with aerial lidar (which lacks oblique views). Videogrammetry can also prove invaluable on site while drone mapping, such as when trees obstruct the flight path or when the project requires capturing details facing upwards. Similar to drone workflows, strategically placed and precisely measured GCPs can significantly improve the overall precision of the generated 3D model. Since videogrammetry usually involves capturing data from low angles, consider using AprilTag targets for superior oblique detection.

Challenges and considerations

Videogrammetry offers immense potential for various applications, yet its implementation comes with a set of challenges. Filming excessive footage can result in software inaccuracies, leading to duplicate surfaces in the 3D model, so it is important to carefully consider the path taken when filming. Some areas may be challenging to film, but if they are not captured in the video, they cannot be included when reconstructing the 3D model. Filming while navigating through obstacles, especially on construction sites, requires caution and precision on the part of the user, which sometimes gets in the way of creating a perfect video. Weather conditions such as puddles and sunlight can affect data accuracy by creating reflective surfaces and casting shadows, respectively. Filming areas obstructed by roofs, trees or walls can degrade the RTK signal, leading to inaccuracies in the final model.

Tips for accurate data capture

The quality of the output depends on a number of factors, including how the data is captured. The following basic principles can help users obtain the necessary coordinates when filming so that the 3D model will be as realistic as possible:

Move slowly and steadily: To obtain sharp images, maintain slow and smooth movements. This is especially crucial in poor light conditions, when the shutter speed is low and video frames are more susceptible to blur.

Rotate slowly and move while turning: Just like when towing a trailer, it is necessary to move back and forth rather than trying to turn on the spot.

Don't 'wall paint' when scanning vertical surfaces: Standing in one place while tilting the device up and down will generate a lot of images, but they will all have the same coordinates. Instead, move in a lateral direction while recording at different heights.

Film in connected/closed loops: Try to ensure that the filming ends precisely back at the starting point.

Advantages of videogrammetry

Videogrammetry offers significant advantages in surveying, particularly when smartphones are leveraged as data capture devices. The portability and convenience of smartphones enable swift, efficient and accessible data collection in a wide range of situations, making it possible to document areas that are small, rapidly changing, or require close-up details in hard-to-reach places.
Moreover, unlike traditional methods requiring specialized equipment and expertise, smartphone videogrammetry empowers more professionals to capture and reconstruct 3D data. The standout feature of videogrammetry is that it largely eliminates overlap concerns, since the video is shot continuously at approximately 30 FPS. This accessibility paves the way for even broader surveying applications. Integrating videogrammetry into a data collection toolkit promises to accelerate project timelines, streamline workflows and, above all, improve responsiveness, since a smartphone is always on site. This makes videogrammetry a cost-effective solution for surveying tasks.

Videogrammetry in practice

These three case studies illustrate the practical application of videogrammetry in various situations, ranging from a simple scenario to a mid-sized construction site. A Pixel smartphone with RTK and the 3Dsurvey ScanApp were used in all cases.

1. Pile volume calculation: Photogrammetry is commonly used to calculate the volume of material piles, but the pre-flight preparation and planning can be time-consuming. Moreover, drones are bulky and less portable than a smartphone. Therefore, videogrammetry, with a handheld RTK antenna connected to a smartphone, can offer a much faster and simpler alternative. In this project to capture a small pile of material, the smartphone was simply held up high, tilted down and moved in a circle around the heap. During a five-minute site visit, a 75-second video was recorded, from which 152 frames were extracted. The processing time amounted to 30 minutes.

2. Preserving cultural heritage: Archaeological excavation sites and statues that require 3D documentation are often located in crowded urban areas which may be subject to strict drone regulations. Some culturally significant items may be located indoors, such as in museums, where photogrammetry is challenging. Moreover, using ground surveying equipment like laser scanners requires highly technical knowledge, can be expensive, and is therefore often unsuitable for such projects. In a project to capture a complete and accurate 3D scan of a dragon statue, a total of 20 minutes was spent on site. 226 frames were extracted from 113 seconds of video. The subsequent processing time was one hour.

3. Underground infrastructure project: Videogrammetry can be successfully used in the context of underground construction and engineering projects, such as when laying pipes into a trench. Compared with documenting the site using traditional equipment, a smartphone equipped with RTK technology makes the documentation process remarkably efficient. Just as with drone photogrammetry, multiple mappings can be performed to track progress. As another advantage, videogrammetry makes it possible to get really close and record details that may be hidden from the top-down aerial view. In support of a construction and engineering project for the installation of underground fuel tanks, videogrammetry was used to document and extract exact measurements, monitor the width and depth, calculate the volume and extract profile lines. A 15-minute site visit was sufficient to record 87 seconds of video, yielding 175 extracted frames. The processing time amounted to 45 minutes.

Conclusion

While it is not a replacement for established surveying techniques like photogrammetry and laser scanning, videogrammetry has emerged as a valuable addition to the surveyor's toolkit.
Overall, this technology offers professionals significant gains in convenience, efficiency and flexibility, because surveyors can capture data using just a smartphone. This allows faster and more accessible data capture across versatile situations, and provides cost-effective solutions adaptable to specific needs while integrating seamlessly with existing surveying workflows. While filming with a smartphone still has its limitations, continuous advancements in hardware, software and best practices are steadily improving the accuracy and reliability of data collection. This ongoing evolution means that videogrammetry has the potential to contribute to better-informed decision-making and become an indispensable tool for the modern construction professional.

Source: GIM International
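As a rough illustration of the frame-extraction step at the heart of the workflow described above, the following sketch samples a video at a fixed stride and rejects blurry frames using the variance of the Laplacian as a sharpness score. The stride, threshold and file names are assumptions for illustration; production tools such as 3Dsurvey handle this step internally.

```python
# Sketch of the frame-extraction step: sample a video at a fixed stride and
# discard motion-blurred frames via the variance-of-Laplacian sharpness
# score. Stride and threshold values are illustrative assumptions.
import os
import cv2

def extract_frames(video_path, out_dir, stride=10, blur_threshold=100.0):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    kept, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break                       # end of video
        if index % stride == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
            if sharpness >= blur_threshold:  # keep only sharp frames
                cv2.imwrite(f"{out_dir}/frame_{kept:05d}.jpg", frame)
                kept += 1
        index += 1
    cap.release()
    return kept

# e.g. extract_frames("site_visit.mp4", "frames", stride=15)
```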
  14. Cluster polygons

something like this? python - Polygon clustering - Geographic Information Systems Stack Exchange
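If it helps, here is a minimal sketch along the lines of the answers in that thread: clustering polygons by their centroids with DBSCAN via GeoPandas and scikit-learn. The file name, CRS and distance threshold are placeholders for your own data.

```python
# Sketch: cluster polygons by centroid with DBSCAN. File name, CRS and
# eps distance are placeholders to adapt to your dataset.
import geopandas as gpd
from sklearn.cluster import DBSCAN

gdf = gpd.read_file("polygons.shp").to_crs(epsg=3857)  # project to metres

# eps is the neighbour distance in metres; tune it for your data.
coords = [(p.x, p.y) for p in gdf.geometry.centroid]
gdf["cluster"] = DBSCAN(eps=500, min_samples=2).fit_predict(coords)

# Label -1 marks noise polygons that belong to no cluster.
print(gdf.groupby("cluster").size())
```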
  15. What version of QGIS are you using right now?
  16. Does the error occur in all the historical data you've checked?
  17. My previous post actually addresses your first question 😁 I saw that you already have the equation relating band value and depth, so it should be fine to apply it directly to get the depth values.
  18. Researchers at the University of Science and Technology of China (USTC) have developed a compact and lightweight single-photon LiDAR system that can be deployed in the air to generate high-resolution three-dimensional images with a low-power laser. The technology could be used for terrain mapping, environmental monitoring, and object identification, according to a press release.

LiDAR, which stands for Light Detection And Ranging, is extensively used to gather geospatial information. The system emits light with pulsed lasers and measures the time taken for the reflected light to return in order to determine range, creating digital twins of objects and examining the surface of the Earth. A common application has been to help autonomous driving systems or airborne drones perceive their environments. However, this requires an extended setup of LiDAR sensors, which is power-intensive. To minimize such sensors' energy consumption, USTC researchers devised a single-photon LiDAR system and tested it in an airborne configuration.

The single-photon LiDAR

The single-photon LiDAR system is made possible by detection systems that can measure the tiny amount of laser light that returns after reflection. The researchers had to shrink the entire LiDAR system to develop it. It works like a regular LiDAR system when sending light pulses toward its targets. To capture the small amounts of reflected light, the team used highly sensitive detectors called single-photon avalanche diode (SPAD) arrays, which can detect single photons. To reduce the overall system size, the team also used small telescopes with an optical aperture of 47 mm as the receiving optics. The time-of-flight of the photons makes it possible to determine the distance to the ground, and advanced computer algorithms generate detailed three-dimensional images of the terrain from the sensor data.

"A key part of the new system is the special scanning mirrors that perform continuous fine scanning, capturing sub-pixel information of the ground targets," said Feihu Xu, a member of the research team at USTC. "Also, a new photon-efficient computational algorithm extracts this sub-pixel information from a small number of raw photon detections, enabling the reconstruction of super-resolution 3D images despite the challenges posed by weak signals and strong solar noise."

Testing in a real-world scenario

To validate the new system, the researchers conducted daytime tests onboard a small airplane in Yiwu City, Zhejiang Province. In pre-flight ground tests, the LiDAR demonstrated a resolution of nearly six inches (15 cm) from nearly a mile (1.5 km). The team then implemented sub-pixel scanning and 3D deconvolution and found the resolution improved to 2.3 inches (6 cm) from the same distance. "We were able to incorporate recent technology developments into a system that, in comparison to other state-of-the-art airborne LiDAR systems, employs the lowest laser power and the smallest optical aperture while still maintaining good performance in detection range and imaging resolution," added Xu.

The team is now working to improve the system's performance and integration so that a small satellite can be equipped with such technology in the future. "Ultimately, our work has the potential to enhance our understanding of the world around us and contribute to a more sustainable and informed future for all," Xu said in the press release.
“For example, our system could be deployed on drones or small satellites to monitor changes in forest landscapes, such as deforestation or other impacts on forest health. It could also be used after earthquakes to generate 3D terrain maps that could help assess the extent of damage and guide rescue teams, potentially saving lives.”
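The ranging principle at the core of any such system reduces to one line: distance is the speed of light times half the photon's round-trip time. A trivial sketch, with an illustrative timestamp roughly matching the 1.5 km test range mentioned above:

```python
# The time-of-flight ranging principle in one line: a photon's round-trip
# time gives the distance. The example timestamp is illustrative.
C = 299_792_458.0  # speed of light, m/s

def tof_to_range(tof_seconds: float) -> float:
    return C * tof_seconds / 2.0  # divide by 2 for the out-and-back path

print(f"{tof_to_range(10e-6):.0f} m")  # a 10 microsecond round trip ~ 1499 m
```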
  19. Comparison of shallow-water depth algorithms: https://ejournal2.undip.ac.id/index.php/jkt/article/view/16050
  20. Multimodal machine learning models have been surging in popularity, marking a significant evolution in artificial intelligence (AI) research and development. These models, capable of processing and integrating data from multiple modalities such as text, images, and audio, are important because they can tackle complex real-world problems that traditional unimodal models struggle with. The fusion of diverse data types enables these models to extract richer insights, enhance decision-making processes, and ultimately drive innovation.

Among the burgeoning applications of multimodal machine learning, Visual Question Answering (VQA) models have emerged as particularly noteworthy. VQA models comprehend both images and accompanying textual queries, providing answers or relevant information based on the content of the visual input. This capability opens up avenues for interactive systems, enabling users to engage with AI in a more intuitive and natural manner. However, despite their immense potential, the deployment of VQA models, especially in critical scenarios such as disaster recovery efforts, presents unique challenges. In situations where internet connectivity is unreliable or unavailable, deploying these models on tiny hardware platforms becomes essential. Yet the deep neural networks that power VQA models demand substantial computational resources, rendering traditional edge computing hardware solutions impractical.

Inspired by optimizations that have enabled powerful unimodal models to run on tinyML hardware, a team led by researchers at the University of Maryland has developed a novel multimodal model called TinyVQA that allows extremely resource-limited hardware to run VQA models. Using some clever techniques, the researchers were able to compress the model to the point that it could run inferences in a few tens of milliseconds on a common low-power processor found onboard a drone. In spite of this substantial compression, the model was able to maintain acceptable levels of accuracy.

To achieve this goal, the team first created a deep learning VQA model similar to other state-of-the-art algorithms that have been previously described. This model was far too large for tinyML applications, but it contained a wealth of knowledge. Accordingly, it was used as a teacher for a smaller student model. This practice, called knowledge distillation, captures many of the important associations found in the teacher model and encodes them in a more compact form in the student model. In addition to having fewer layers and fewer parameters, the student model also made use of 8-bit quantization, which reduces both the memory footprint and the computational resources required when running inferences. Another optimization involved swapping regular convolution layers out in favor of depthwise separable convolution layers, which further reduced model size while having a minimal impact on accuracy.

Having designed and trained TinyVQA, the researchers evaluated it using the FloodNet-VQA dataset. This dataset contains thousands of images of flooded areas captured by a drone after a major storm. Questions were asked about the images to determine how well the model understood the scenes. The teacher model, which weighs in at 479 megabytes, was found to have an accuracy of 81 percent. The much smaller TinyVQA model, only 339 kilobytes in size, achieved a very impressive 79.5 percent accuracy.
In a practical trial, the model was deployed on the GAP8 microprocessor onboard a Crazyflie 2.0 drone. With inference times averaging 56 milliseconds on this platform, the team demonstrated that TinyVQA could realistically assist first responders in emergency situations. Many other autonomous, intelligent systems could also be enabled by this technology.

source: hackster.io
  21. A new machine learning system can create height maps of urban environments from a single synthetic aperture radar (SAR) image, potentially accelerating disaster planning and response. Aerospace engineers at the University of the Bundeswehr in Munich claim their SAR2Height framework is the first to provide complete, if not perfect, three-dimensional city maps from a single SAR satellite.

When an earthquake devastates a city, information can be in short supply. With basic services disrupted, it can be difficult to assess how much damage occurred or where the need for humanitarian aid is greatest. Aerial surveys using laser-ranging lidar systems provide the gold standard for 3D mapping, but such systems are expensive to buy and operate, even without the added logistical difficulties of a major disaster. Remote sensing is another option, but optical satellite images are next to useless if the area is obscured by clouds or smoke.

Synthetic aperture radar, on the other hand, works day or night, whatever the weather. SAR is an active sensor that uses the reflections of signals beamed from a satellite towards the Earth’s surface; the “synthetic aperture” comes from the radar using the satellite’s own motion to mimic a larger antenna, capturing reflected signals with relatively long wavelengths. There are dozens of governmental and commercial SAR satellites orbiting the planet, and many can be tasked to image new locations in a matter of hours. However, SAR imagery is still inherently two-dimensional, and it can be even trickier to interpret than photographs. This is partly due to an effect called radar layover, in which undamaged buildings appear to be toppling towards the sensor.

“Height is a super complex topic in itself,” says Michael Schmitt, a professor at the University of the Bundeswehr. “There are a million definitions of what height is, and turning a satellite image into a meaningful height in a meaningful world geometry is a very complicated endeavor.”

Schmitt and his colleague Michael Recla started by sourcing SAR images of 51 cities from the TerraSAR-X satellite, a partnership between the public German Aerospace Center and the private contractor Airbus Defence and Space. The researchers then obtained high-quality height maps for the same cities, mostly generated by lidar surveys but some by planes or drones carrying stereo cameras. The next step was to make a one-to-one, pixel-to-pixel mapping between the height maps and the SAR images on which they could train a deep neural network.
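In outline, that training setup is an image-to-image regression problem. The following is a minimal sketch under assumed simplifications: a small fully convolutional network and an L1 loss, since the post does not describe the authors' actual architecture or loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeightNet(nn.Module):
    """Toy fully convolutional network: regresses a per-pixel height
    (in metres) from a single-channel SAR amplitude tile."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = HeightNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

sar = torch.rand(4, 1, 256, 256)      # stand-in SAR amplitude tiles
height = torch.rand(4, 1, 256, 256)   # co-registered lidar height maps

loss = F.l1_loss(model(sar), height)  # mean absolute height error
loss.backward()
optimizer.step()
```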
The results were amazing, says Schmitt. “We trained our model purely on TerraSAR-X imagery but out of the box, it works quite well on imagery from other commercial satellites.” He says the model, which takes only minutes to run, can predict the height of buildings in SAR images with an accuracy of around three meters, the height of a single story in a typical building. That means the system should be able to spot almost every building across a city that has suffered significant damage.

Pietro Milillo, a professor of geosensing systems engineering at the University of Houston, hopes to use Schmitt and Recla’s model in an ongoing NASA-funded project on earthquake recovery. “We can go from a map of building heights to a map of probability of collapse of buildings,” he says. Later this month, Milillo intends to validate his application by visiting the site of an earthquake in Morocco last year that killed over 2,900 people.

But the AI model is still far from perfect, warns Schmitt. It struggles to accurately predict the height of skyscrapers, and it is biased towards North American and European cities, because many cities in developing nations have not had regular lidar mapping flights to provide representative training data. The longer the gap between the lidar flight and the SAR images, the more buildings will have been built or replaced, and the less reliable the model’s predictions.

Even in richer countries, “we’re really dependent on the slow revisit cycles of governments flying lidar missions and making the data publicly available,” says Carl Pucci, founder of EO59, a Virginia Beach, Va.-based company specializing in SAR software. “It just sucks. Being able to produce 3D from SAR alone would really be a revolution.”

Schmitt says the SAR2Height model now incorporates data from 177 cities and is getting better all the time. “We are very close to reconstructing actual building models from single SAR images,” he says. “But you have to keep in mind that our method will never be as accurate as classic stereo or lidar. It will always remain a form of best guess instead of high-precision measurement.”

source: ieee
  22. Satellite images analyzed by AI are emerging as a new tool for finding unmapped roads that bring environmental destruction to wilderness areas. James Cook University's Distinguished Professor Bill Laurance was co-author of a study assessing the reliability of an automated approach to large-scale road mapping that uses convolutional neural networks trained on road data from satellite images.

He said the Earth is experiencing an unprecedented wave of road building, with some 25 million kilometers of new paved roads expected by mid-century. "Roughly 90% of all road construction is occurring in developing nations, including many tropical and subtropical regions of exceptional biodiversity. By sharply increasing access to formerly remote natural areas, poorly regulated road development triggers dramatic increases in environmental disruption due to activities such as logging, mining and land clearing," said Professor Laurance.

He said many roads in such regions, both legal and illegal, are unmapped, with road-mapping studies in the Brazilian Amazon, Asia-Pacific and elsewhere regularly finding up to 13 times more road length than reported in government or road databases. "Traditionally, road mapping meant tracing road features by hand, using satellite imagery. This is incredibly slow, making it almost impossible to stay on top of the global road tsunami," said Professor Laurance.

The researchers trained three machine-learning models to automatically map road features from high-resolution satellite imagery covering rural, generally remote and often forested areas of Papua New Guinea, Indonesia and Malaysia; a sketch of how such a model is typically set up follows this post.

"This study shows the remarkable potential of AI for large-scale tasks like global road-mapping. We're not there yet, but we're making good progress," said Professor Laurance. "Proliferating roads are probably the most important direct threat to tropical forests globally. In a few more years, AI might give us the means to map and monitor roads across the world's most environmentally critical areas."

journal: https://www.mdpi.com/2072-4292/16/5/839
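The study's models themselves aren't reproduced in this post, but a common way to frame automated road mapping is per-pixel binary segmentation. Here is a minimal sketch with an assumed toy network and random stand-in data, not the authors' implementation:

```python
import torch
import torch.nn as nn

class RoadSegNet(nn.Module):
    """Toy convolutional network that outputs a per-pixel logit for
    'road present' from an RGB satellite tile."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, 3, padding=1),   # road logits
        )

    def forward(self, x):
        return self.net(x)

model = RoadSegNet()
image = torch.rand(2, 3, 256, 256)                       # RGB satellite tiles
road_mask = torch.randint(0, 2, (2, 1, 256, 256)).float() # hand-traced labels

loss = nn.BCEWithLogitsLoss()(model(image), road_mask)
loss.backward()

# At inference time, threshold the probabilities to get a binary road
# mask, which can then be vectorised into line features for a road map.
with torch.no_grad():
    predicted = torch.sigmoid(model(image)) > 0.5
```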
  23. The European Space Agency (ESA) has greenlit the development of the NanoMagSat constellation, marking a significant advancement in the use of small satellites for scientific missions. NanoMagSat, a flagship mission spearheaded by Open Cosmos together with IPGP (Université Paris Cité, Institut de physique du globe de Paris, CNRS) and CEA-Léti, aims to revolutionise our understanding of Earth's magnetic field and ionospheric environment.

As a follow-on from ESA's successful Earth Explorer Swarm mission, NanoMagSat will use a constellation of three 16U satellites equipped with state-of-the-art instruments to monitor magnetic fields and ionospheric phenomena. The mission joins the Scout family, an ESA programme to deliver scientific small satellite missions within a budget of less than €35 million. The decision to proceed with NanoMagSat follows the successful completion of Risk Retirement Activities, including the development of a 3 m-long deployable boom and a satellite platform with exceptional magnetic cleanliness, key to ensuring state-of-the-art magnetic accuracy.

ESA’s Director of Earth Observation Programmes, Simonetta Cheli, said of the news: “We are very pleased to add two new Scouts to our Earth observation mission portfolio. These small science missions perfectly complement our more traditional existing and future Earth Explorer missions, and will bring exciting benefits to Earth.”
  24. Leica Geosystems, part of Hexagon, introduces the Leica TerrainMapper-3 airborne LiDAR sensor, featuring new scan pattern configurability to support a wide variety of applications and requirements in a single system. Building upon Leica Geosystems’ legacy of LiDAR efficiency, the TerrainMapper-3 provides three scan patterns that let operators tailor the sensor’s performance to specific applications: circle scan patterns enhance 3D modelling of urban areas or steep terrain, ellipse scan patterns optimise data capture for more traditional mapping, and skew ellipse scan patterns improve point density for infrastructure and corridor mapping.

The sensor’s higher scan speed allows customers to fly the aircraft faster while maintaining data quality, and the 60-degree adjustable field of view maximises data collection with fewer flight lines. The TerrainMapper-3 is complemented by the Leica MFC150 4-band camera, which operates with the same 60-degree field-of-view coverage as the LiDAR for exact data consistency.

Thanks to reduced beam divergence, the TerrainMapper-3 provides improved planimetric accuracy, while new MPiA (Multiple Pulses in Air) handling delivers more consistent data acquisition, even in steep terrain, giving users greater reliability and precision. The new system also introduces real-time full-waveform recording at the maximum pulse rate, opening new opportunities for advanced and automated point classification. The TerrainMapper-3 integrates with the Leica HxMap end-to-end processing workflow, supporting users from mission planning to product generation to extract the greatest value from the collected data.