Lurker

Everything posted by Lurker

  1. Sometimes you need to create a satellite navigation tracking device that communicates via a low-power mesh network. [Powerfeatherdev] was in just that situation, and they whipped up a particularly compact solution to do the job. As you might have guessed based on the name of its creator, this build is based around the ESP32-S3 PowerFeather board. The PowerFeather has the benefit of robust power management features, which makes it perfect for a power-sipping project that’s intended to run for a long time. It can even run on solar power and manage battery levels if so desired. The GPS and LoRa gear is all mounted on a secondary “wing” PCB that slots directly onto the PowerFeather like an Arduino shield or Raspberry Pi HAT. The whole assembly is barely larger than a AA battery. It’s basically a super-small GPS tracker that transmits over LoRa, optimized for maximum run time on limited power from a small lithium-ion cell. If you need long-duration, low-power tracking for a project, this might be right up your alley. https://hackaday.com/2024/10/17/tiny-lora-gps-node-relies-on-esp32/
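The write-up doesn't include firmware, but a common trick for squeezing a GPS fix into a small LoRa frame is fixed-point packing. Here is a minimal Python sketch; the field widths, scaling factors, and function names are my assumptions for illustration, not taken from [Powerfeatherdev]'s build:

```python
import struct

def pack_fix(lat_deg, lon_deg, alt_m, battery_pct):
    """Pack a GPS fix into an 11-byte little-endian LoRa payload.

    Latitude/longitude are scaled to 1e-5 degrees (~1 m resolution)
    and stored as signed 32-bit integers; altitude as a signed
    16-bit count of metres; battery level as one unsigned byte.
    """
    return struct.pack(
        "<iihB",
        int(round(lat_deg * 1e5)),
        int(round(lon_deg * 1e5)),
        int(round(alt_m)),
        int(battery_pct),
    )

def unpack_fix(payload):
    """Inverse of pack_fix: recover (lat, lon, alt, battery)."""
    lat, lon, alt, batt = struct.unpack("<iihB", payload)
    return lat / 1e5, lon / 1e5, alt, batt
```

Eleven bytes fits comfortably in a LoRa payload even at the slowest spreading factors, which is what makes layouts like this attractive for long-run battery-powered nodes; on the ESP32 side the same layout could be produced with a packed C struct.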
  2. Multiple motors or servos are the norm for drones to achieve controllable flight, but a team from MARS LAB HKU was able to build a 360° lidar-scanning drone with full control using just a single motor and no additional actuators. Video after the break. The key to controllable flight is the swashplateless propeller design that we’ve seen a few times, but it always required a second propeller to counteract self-rotation. In this case, the team was able to make that self-rotation work for them, achieving 360° scanning with a single fixed lidar sensor. Self-rotation still needs to be slowed, so this was done with four stationary vanes. The single rotor also means better efficiency compared to a multi-rotor with similar propeller disk area. The lidar comprises a full 50% of the drone’s weight and provides a conical FOV out to a range of 450 m. All processing happens onboard the drone, with point cloud data being processed by a lidar-inertial odometry framework. This allows the drone to track and plan its flight path while also building a 3D map of an unknown environment. This means it would be extremely useful for indoor or underground environments where GPS or other positioning systems are not available. All the design files and code for the drone are up on GitHub, and most of the electronic components are off-the-shelf. This means you can build your own, and the expensive lidar sensor is not required to get it flying. This seems like a great platform for further experimentation, and getting usable video from a normal camera would be an interesting challenge. Single Rotor Drone Spins For 360 Lidar Scanning | Hackaday
  3. The fall update to Global Mapper includes numerous usability updates, processing improvements, and, with Pro, beta access to the Global Mapper Insight and Learning Engine, which provides deep learning-based image analysis tools. Global Mapper is a complete geospatial software solution. The Standard version excels at basic vector, raster, and terrain editing, with Global Mapper Pro expanding the toolset to support drone-collected image processing, point cloud classification and extraction, and many more advanced image and terrain analysis options. Version 26.0 of Global Mapper Standard focuses on ease-of-use updates to improve the experience and efficiency of the software. A Global Search acts as a toolbox to locate any tool within the program, and a source search in the online data streaming tool makes it easier to bring online data into the application. Updates for working with 3D data include construction site planning to keep all edited terrain for a flattened site within a selected area, and the ability to finely adjust the vertex position of 3D lines in reference to terrain in the Path Profile tool. Perhaps the largest addition to Global Mapper Pro v26.0 is the new Insight and Learning Engine, which provides deep learning-based image analysis. Available with Global Mapper Pro for a limited time for users to test and explore, the engine offers built-in models for building extraction, vehicle detection, and land cover classification. These models can even be fine-tuned with iterative training to optimize the analysis for the data area.
  4. Responding to the escalating threats from climate change, biodiversity loss, pollution and extreme weather, this forward-looking strategy outlines a bold vision for Earth science through to 2040. By leveraging advanced satellite-based monitoring of our planet, ESA aims to provide critical data and knowledge to guide action and policy for a more sustainable future. ESA’s Director of Earth Observation Programmes, Simonetta Cheli, said, “As a space agency, it is our duty to harness the unique power of Earth observing technology to inform the critical decisions that will shape our future. Our new Earth Observation Science Strategy underscores a science-first approach where satellite technology provides data that contribute to our collective understanding of the Earth system as a whole, so that solutions can be found to address global environmental challenges. The choices we make today help create a more sustainable world and propel the transformation towards a resilient, thriving global society.” The new Science Strategy presents a bold and ambitious vision for the future of ESA’s Earth Observation Programmes. It shifts focus towards understanding the feedbacks and interconnections within the Earth system, rather than targeting specific Earth system domains.
  5. You're a hotshot working to contain a wildfire. The conflagration jumps the fire line, forcing your crew to flee along pre-determined escape routes. At the start of the day, the crew boss estimated how long it should take to get to the safety zone. With the flames at your back, you check your watch and hope that estimate was right. Firefighters mostly rely on lifelong experience and ground-level information to choose evacuation routes, with little support from digital mapping or aerial data. The tools that do exist tend to consider only a landscape's steepness when estimating the time it takes to traverse terrain. However, running up a steep road may be quicker than navigating a flat boulder field or bushwhacking through chest-high shrubs. Firefighters, disaster responders, rural health care workers and professionals in myriad other fields need a tool that incorporates all aspects of a landscape's structure to estimate travel times. In a new study, researchers from the University of Utah introduced Simulating Travel Rates in Diverse Environments (STRIDE), the first model that incorporates ground roughness and vegetation density, in addition to slope steepness, to predict walking travel times with unprecedented accuracy. "One of the fundamental questions in firefighter safety is mobility. If I'm in the middle of the woods and need to get out of here, what is the best way to go and how long will it take me?" said Mickey Campbell, research assistant professor in the School of Environment, Society and Sustainability (ESS) at the U and lead author of the study. The authors analyzed airborne Light Detection and Ranging (LiDAR) data and conducted field trials to develop a remarkably simple, accurate equation that identifies the most efficient routes between any two locations in wide-ranging settings, from paved, urban environments to off-trail, forested landscapes. 
They found that STRIDE consistently chose routes resembling paths that a person would logically seek out—a preference for roads, trails and paths of least resistance. STRIDE also produced much more accurate travel times than the standard slope-only models, which severely underestimated travel time. "If the fire reaches a firefighter before they reach safety, the results can be deadly, as has happened in tragedies such as the 2013 Yarnell Hill fire," said Campbell. "STRIDE has the potential to not only improve firefighter evacuation but also better our understanding of pedestrian mobility across disciplines from defense to archaeology, disaster response and outdoor recreation planning."

Airborne estimates of on-the-ground travel

STRIDE is the first comprehensive model to use airborne LiDAR data to map two underappreciated factors that inhibit off-road travel—vegetation density and ground surface roughness—as well as steepness. LiDAR is commonly used to map the structure of a landscape from the air, Campbell explained. A LiDAR-equipped plane has sensors that shoot millions of laser pulses in all directions, which bounce back and paint a detailed map of structures on the ground. The laser pulses bounce off leaf litter, gravel, boulders, shrubs and tree canopies to build three-dimensional maps of terrain and vegetation with centimeter-level precision. The authors compared STRIDE's performance against travel rates gleaned from three field experiments, in which volunteers walked along 100-meter transects through areas with existing LiDAR data. "Getting travel times from a variety of volunteers allowed us to account for a range of human performance so we can make the most accurate predictions of travel rates in a diversity of environments," said co-author Philip Dennison, professor and director of ESS. The first field trials were in September of 2016. At the time, LiDAR datasets were relatively rare in the western U.S. Over the last decade, the U.S. Geological Survey has developed LiDAR maps covering most of the country. "When we first started looking into wildland firefighter mobility a decade ago, there were lots of people studying how fire spreads across the landscape, but very few people were working on the problem of how firefighters move across the landscape," said Campbell, then a doctoral student in Dennison's lab at ESS. "Only by combining these two pieces of information can we truly understand how to improve firefighter safety." That study, published in 2017, was the first attempt to map escape routes for wildland firefighters using LiDAR. The second trial took place in August of 2023 in the central Wasatch Mountains of Utah to capture a wider set of undeveloped, off-path landscape conditions than the first experiment, including nearly impassably steep slopes and extremely dense vegetation. The final experiment was in January of 2024 in Salt Lake City to test the STRIDE model in an urban environment. In total, about 50 volunteers walked more than 40 100-meter transects of highly varied terrain.

Putting it together

The study compared STRIDE against a slope-only model to generate the most efficient routes, or least-cost paths, in the mountains surrounding Alta Ski Resort in the Wasatch Mountains, Utah. Geographers and archaeologists have been using least-cost path modeling to simulate human movement for decades; however, to date most have relied almost exclusively on slope as the sole landscape impediment. The authors imagined a scenario in which emergency responders are planning to rescue an injured hiker. From a central point, they chose 1,000 random locations for the hiker and asked both models to find the least-cost path. STRIDE chose established roads around the ski areas, followed trails and in some cases major ski slopes, to avoid patches of forest or dense vegetation. 
STRIDE reused established paths as long as possible before branching off, reinforcing the idea that STRIDE identified the routes most intuitive for somebody on the ground. "The really cool thing is that we didn't supply the algorithm with any knowledge of existing transportation networks. It just knew to take the roads because they're smoother, not vegetated and tend to be less steep," said Campbell. In contrast, the slope-only model had few overlapping pathways, with little regard for roads or trails. It sent rescuers through dense vegetation, dangerous scree fields and forested areas. The authors believe that STRIDE will have an immediate impact in the real world—they've made the STRIDE model publicly available so that anyone with LiDAR data and gumption can make their work or recreation more efficient, with a higher safety margin. "If you don't consider the vegetation cover and ground-surface material, you're going to significantly underestimate your total travel time. The U.S. Forest Service has been really supportive of this travel rate research because they recognize the inherent value of understanding firefighter mobility," said Campbell. "That's what I love about this work. It's not just an academic exercise, but it's something that has real, tangible implications for firefighters and for professionals in so many other fields." The authors recently used a slope-based travel rate model to update the U.S. Forest Service Ground Evacuation Time (GET) layer, which allows wildland firefighters to estimate travel time to the nearest medical facility from any location in the contiguous U.S. Campbell hopes to use STRIDE to improve GET, allowing for more accurate estimates of evacuation times. links: https://www.nature.com/articles/s41598-024-71359-6
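STRIDE's actual travel-rate equation is given in the paper linked above; as a toy illustration of the least-cost-path search such a model feeds, here is a minimal grid Dijkstra over an invented cost raster (all numbers and names are hypothetical, not from the study):

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra search over a grid of per-cell traversal costs.

    cost[r][c] is the time to cross cell (r, c); in a STRIDE-like
    model it would combine slope, ground roughness and vegetation
    density derived from LiDAR. 4-connected moves for simplicity.
    Returns (path, total_cost); raises KeyError if goal unreachable.
    """
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]  # pay the cost of the entered cell
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # walk back from goal to start
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```

In a STRIDE-like workflow each cell's cost would come from the LiDAR-derived landscape structure rather than being hand-set; a row of cheap cells here plays the role of a road, and the search naturally hugs it, which mirrors the paper's observation that the model takes roads without being told about them.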
  6. A new U.S. government report highlights mixed progress in the modernization of the Global Positioning System (GPS), citing advancements in satellite and ground equipment upgrades alongside persistent delays in some areas. The Government Accountability Office (GAO) report, released Sept. 9, reveals that the Space Force is grappling with technical hurdles in next-generation GPS satellites and ground systems. These challenges have eroded schedule margins, potentially pushing back the delivery of 24 M-code-capable satellites crucial for military operations through the 2030s. M-code, a more secure and jam-resistant signal, is central to the modernization efforts. The ground control segment known as OCX, while achieving some key testing milestones, still requires further evaluation before military acceptance. The projected acceptance date is now set for December 2025. The report also flags risks in the development of user equipment, including microchips and cards that process M-code signals. Although the first increment of user equipment is approaching final tests, newly discovered deficiencies threaten to disrupt the timeline. The Department of Defense is additionally working to address potential shortages of GPS chips and cards. Lockheed Martin, the prime contractor for the next-generation GPS IIIF satellites, is tackling manufacturing difficulties with a crucial component, the Linearized Traveling Wave Tube Amplifier, the report says. This component is essential for enabling a high-powered, steerable M-code signal. To mitigate these challenges, Lockheed Martin has subcontracted the construction of amplifiers from the third GPS IIIF satellite onward. The OCX program, led by Raytheon, completed a qualification test for Blocks 1 and 2 in December 2023. However, several test events remain before the system can be accepted for operational use. 
The related OCX Block 3F program has made progress in software development, but ongoing delays with earlier blocks have complicated efforts. This annual assessment — mandated by Congress in the 2016 National Defense Authorization Act — requires GAO to evaluate the cost, schedule, and performance of GPS acquisition programs. The report underscores the complexity and ongoing challenges in modernizing this critical global navigation infrastructure.
  7. Satellite communications company OneWeb unveiled a new positioning, navigation, and timing (PNT) service amid global concerns about GPS vulnerability to jamming and interference in critical sectors such as defense, aviation and emergency services. The service, called Astra, aims to ensure uninterrupted communications for OneWeb’s satellite broadband customers, even when GPS or other global navigation satellite system (GNSS) signals are unavailable or compromised. The system utilizes a software-defined outdoor receiver capable of accessing PNT signals from both GNSS and alternative PNT broadcast services such as Iridium satellites. Upon identifying an alternative PNT source, Astra generates an output signal compatible with the standard GPS L1 interface, the company said. The service offers different versions for the U.S. government and for allied governments. Kevin Steen, President and CEO of Eutelsat America Corp. and OneWeb Technologies, said Astra is a “game-changer for defense users operating in difficult environments.”
  8. Great news, but I hope the future stays bright if they decide to make it free, because some software has gone downhill after going free. BTW, the news page is here: https://www.clarku.edu/centers/geospatial-analytics/2024/08/27/announcement-terrset-liberagis/
  9. The Role of a GIS Portfolio: More Than Just a Resume A resume provides a snapshot of your education, skills, and experience, but a GIS portfolio offers a deeper dive into what you can actually do. It's the difference between telling and showing. While a resume might list "proficiency in ArcGIS" as a skill, a portfolio can demonstrate this proficiency through detailed examples of projects you've completed, maps you've created, and problems you've solved using GIS technology. Your GIS portfolio should include a variety of work samples that highlight your capabilities across different areas of GIS. This might include: Maps and Visualizations: High-quality maps that demonstrate your ability to analyze spatial data and present it in a clear, compelling manner. Project Descriptions: Detailed write-ups of the projects you've worked on, including the challenges you faced, the solutions you implemented, and the impact of your work. Data Analysis: Examples of your ability to analyze and interpret spatial data, using tools such as ArcGIS, or other GIS software. Programming and Automation: If applicable, include scripts or code snippets that show your ability to automate GIS tasks or perform advanced spatial analysis. By including these elements, your portfolio becomes a powerful tool that not only highlights your technical skills but also tells the story of your professional journey in GIS. Building Your Portfolio: A Step-by-Step Guide Creating a GIS portfolio might seem daunting, especially if you're early in your career and don't yet have a wealth of experience to draw from. However, with a strategic approach, you can build a portfolio that effectively showcases your potential. 1) Start with What You Have Don't wait until you've accumulated years of experience before you start building your portfolio. Start with the projects you've completed during your education or any internships you've done. 
Even classroom assignments can be valuable portfolio pieces if they demonstrate your skills and your ability to solve real-world problems. 2) Choose a Platform Your GIS portfolio needs a home, and there are several platforms you can use to create it. Websites like GitHub, Behance, or even a personal website can serve as a platform for your portfolio. Esri’s ArcGIS StoryMaps, ArcGIS Experience Builder, and ArcGIS Hub are excellent tools that allow you to create interactive, visually compelling narratives that showcase your work. 3) Showcase a Variety of Skills When selecting projects for your portfolio, aim for diversity. Include projects that demonstrate your proficiency with different GIS tools and techniques, from spatial analysis and geocoding to data visualization and programming. This not only shows potential employers the breadth of your skills but also your adaptability in different areas of GIS. 4) Provide Context A map or a data visualization on its own might look impressive, but without context, it's just a pretty picture. For each project in your portfolio, provide a brief description that explains the problem you were trying to solve, the methods you used, and the results you achieved. This context is crucial for helping potential employers understand the impact of your work. 5) Keep It Updated Your portfolio should be a living document that evolves as your career progresses. Make it a habit to update your portfolio regularly with new projects and skills. This not only keeps your portfolio fresh but also serves as a reminder of your growth and accomplishments in the field. Leveraging Your Portfolio: How to Use It Effectively Once you've built your GIS portfolio, the next step is to leverage it in your job search and career development. Here are some strategies for making the most of your portfolio: 1) Use It in Job Applications When applying for GIS positions, include a link to your portfolio in your resume and cover letter. 
This allows potential employers to see firsthand what you can do, rather than just reading about it. 2) Bring It to Interviews In an interview, your portfolio can be a powerful tool for demonstrating your skills and experience. Consider bringing a tablet or laptop to the interview so you can walk the interviewer through your portfolio and discuss the projects in detail. 3) Share It on Professional Networks Platforms like LinkedIn are great for sharing your portfolio with a wider audience. Post updates about new projects you’ve added to your portfolio and include a link to your portfolio in your LinkedIn profile. This increases your visibility and can attract potential employers or collaborators. 4) Use It for Networking When networking at conferences or industry events, your portfolio can serve as a conversation starter. Whether you’re talking to potential employers or peers in the industry, being able to show them your work can leave a lasting impression. In the competitive and ever-evolving field of GIS, having a well-crafted portfolio is not just an option—it’s a necessity. A strong GIS portfolio serves as a powerful tool for showcasing your skills, telling your professional story, and navigating your career path. Whether you’re just starting out or looking to make a career transition, your portfolio can help you stand out, demonstrate your value, and open doors to new opportunities in the geospatial industry.
  10. The summer holidays are ending, which for many means a long drive home, relying on a GPS device to arrive safely. Every now and then, though, GPS devices can suggest strange directions or get briefly confused about your location. Until now, no one knew for sure when the satellites were in a good enough position for the GPS system to give reliable directions. TU/e's Mireille Boutin and her co-worker Gregor Kemper at the Technical University of Munich have turned to mathematics to help determine when your GPS system has enough information to determine your location accurately. The research is published in the journal Advances in Applied Mathematics. "In 200 meters, turn right." This is a typical instruction that many have heard from their global positioning system (GPS). Without a doubt, advancements in GPS technologies and mobile navigation apps have helped GPS play a major role in modern car journeys. But strictly adhering to instructions from GPS devices can lead to undesirable situations. Less serious might be turning left instead of right; more serious could be driving your car into a harbor—just as two tourists did in Hawaii in 2023. The latter incident is very much an exception to the rule, and one might wonder: "How often does this happen and why?"

GPS and your visibility

"The core of the GPS system was developed in the mid-1960s. At the time, the theory behind it did not provide any guarantee that the location given would be correct," says Boutin, professor at the Department of Mathematics and Computer Science. It won't come as a surprise, then, to learn that calculating an object's position on Earth relies on some nifty mathematics. And that mathematics hasn't changed much since the early days; it remains at the core of the GPS system we all use, and it was due for an update. 
So, along with her colleague Gregor Kemper at the Technical University of Munich, Boutin turned to mathematics to expand on the theory behind the GPS system, and their finding has recently been published in the journal Advances in Applied Mathematics.

How does GPS work?

Before revealing Boutin and Kemper's big finding, just how does GPS work? Global positioning is all about determining the position of a device on Earth using signals sent by satellites. A signal sent by a satellite carries two key pieces of information—the position of the satellite in space and the time at which the position was sent by the satellite. By the way, the time is recorded by a very precise clock on board the satellite, which is usually an atomic clock. Thanks to the atomic clock, satellites send very accurate times, but the big issue lies with the accuracy of the clock in the user's device—whether it's a GPS navigation device, a smartphone, or a running watch. "In effect, GPS combines precise and imprecise information to figure out where a device is located," says Boutin. "GPS might be widely used, but we could not find any theoretical basis to guarantee that the position obtained from the satellite signals is unique and accurate."

Google says 'four'

If you do a quick Google search for the minimum number of satellites needed for navigation with GPS, multiple sources report that you need at least four satellites. But the question is not just how many satellites you can see, but also what arrangements they can form. For some arrangements, determining the user position is impossible. But what arrangements exactly? That's what the researchers wanted to find out. "We found conjectures in scientific papers that seem to be widely accepted, but we could not find any rigorous argument to support them anywhere. Therefore, we thought that, as mathematicians, we might be able to fill that knowledge gap," Boutin says. 
To solve this, Boutin and Kemper simplified the GPS problem to what works best in practice: equations that are linear in terms of the unknown variables. "A set of linear equations is the simplest form of equations we could hope for. To be honest, we were surprised that this simple set of linear equations for the GPS problem wasn't already known," Boutin adds.

The problem of uniqueness

With their linear equations ready, Boutin and Kemper then looked closely at the solutions to the equations, paying special attention as to whether the equations gave a unique solution. "A unique solution implies that the only solution to the equations is the actual position of the user," notes Boutin. If there is more than one solution to the equations, then only one is correct—that is, the true user position—but the GPS system would not know which one to pick and might return the wrong one. The researchers found that nonunique solutions can emerge when the satellites lie in a special structure known as a "hyperboloid of revolution of two sheets." "It doesn't matter how many satellites send a signal—if they all lie on one of these hyperboloids then it's possible that the equations can have two solutions, so the one chosen by the GPS could be wrong," says Boutin. But what about the claim that you need at least four satellites to determine your position? "Having four satellites can work, but the solution is not always unique," points out Boutin.

Why mathematics matters

For Boutin, this work demonstrates the power and application of mathematics. "I personally love the fact that mathematics is a very powerful tool with lots of practical applications," says Boutin. "I think people who are not mathematicians may not see the connections so easily, and so it is always nice to find clear and compelling examples of everyday problems where mathematics can make a difference." 
Central to Boutin and Kemper's research is the field of algebraic geometry in which abstract algebraic methods are used to solve geometrical, real-world problems. "Algebraic geometry is an area of mathematics that is considered very abstract. I find it nice to be reminded that any piece of mathematics, however abstract it might be, may turn out to have practical applications at some point," says Boutin.
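The article doesn't reproduce Boutin and Kemper's linear system, but the classical nonlinear pseudorange setup it simplifies can be sketched and solved numerically. Below is a standard Gauss-Newton least-squares sketch (the satellite geometry, numbers, and function names are illustrative, not from the paper); it makes concrete why four satellites are the usual minimum: there are four unknowns, the receiver's x, y, z and its clock bias.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_gps(sat_pos, pseudoranges, x0=None, iters=10):
    """Gauss-Newton solution of the classical GPS equations.

    Unknowns: receiver position (x, y, z) in metres and clock
    bias b in seconds. Each satellite i contributes one equation:
        ||sat_i - pos|| + C*b = pseudorange_i
    Needs at least 4 satellites in general position (the paper's
    point: even 4+ satellites on certain hyperboloids of revolution
    can leave two valid solutions).
    """
    x = np.zeros(4) if x0 is None else np.array(x0, float)
    for _ in range(iters):
        diff = sat_pos - x[:3]            # receiver-to-satellite vectors
        rho = np.linalg.norm(diff, axis=1)
        residual = pseudoranges - (rho + C * x[3])
        # Jacobian of the predicted pseudoranges w.r.t. (x, y, z, b)
        J = np.hstack([-diff / rho[:, None], np.full((len(rho), 1), C)])
        dx, *_ = np.linalg.lstsq(J, residual, rcond=None)
        x += dx
    return x[:3], x[3]
```

Note the caveat from the research: if the satellites all lie on one of the degenerate hyperboloids, the equations can admit two mathematically valid solutions, so an iterative solver like this one may converge to the wrong one depending on its starting guess.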
  11. At least 30 people have been killed following the collapse of a dam in Sudan's northwest Red Sea State, according to the United Nations' emergency relief agency. Hundreds more are believed missing, Reuters reported. Flash flooding destroyed 20 villages and damaged a further 50 after the Arba'at Dam collapsed Sunday, the United Nations Office for the Coordination of Humanitarian Affairs (OCHA) said. It estimated 50,000 people had been "severely affected" by the disaster. In the villages of Khor-Baraka and Tukar, residents were reportedly forced to flee for safety, OCHA also said, citing local officials. It added that the final death toll could rise significantly. Agence France-Presse (AFP) footage of the aftermath shows industrial trucks buried in mud and debris, some laden with crates and personal belongings. Other vehicles are almost unrecognizable on the silty riverbank. One resident who lived near the dam, Moussa Mohamad Moussa, described in another video from AFP how "the dam broke and… the water swept away around 40 people." "In the area where I'm from, the Tabub area… they told me that all the houses and everything was swept away," he said. Local media report that the dam burst on Saturday night following heavy rains, but exact details have been difficult to gather due to mobile network outages. Arbaat, 40 km (25 miles) north of Port Sudan, is part of Sudan’s system of dams that help manage floodwaters and is where the two upper branches of the river Nile meet in Sudan. The country has been dealing with heavy rainfall and floods since the end of June, with OCHA saying that the harsh weather has affected an estimated 317,000 people (56,450 families) across 16 states. The ministry of health said on Monday that the death toll from flooding across the country had risen to 132. The most affected states include North Darfur, the River Nile, and West Darfur, OCHA reported.
  12. Bayanat, a leading provider of AI-powered geospatial solutions and a subsidiary of G42, confirmed that the launch of the Synthetic Aperture Radar (SAR) satellite, titled "Foresight-1", is a significant achievement that reinforces the UAE's global leadership in the space sector, as it is the first satellite of the UAE's Earth Observation Space Programme. Hasan Al Hosani, Managing Director of Bayanat, told the Emirates News Agency (WAM) that Foresight-1 places the UAE among the prestigious list of 20 countries around the world that operate SAR space assets, which strengthens its position in the space sector and supports its growing capabilities in this field. He pointed out that the strategic roadmap drawn up by Bayanat and Al Yah Satellite Communications Company (Yahsat), is based on deploying a constellation of satellites with SAR technology in the near future. He explained that since the initial announcement of the launch of the Earth Observation Space Programme in 2023, the two companies have been implementing the strategic plan for the Earth Observation System, starting with the Foresight-1 satellite. He added, "After the successful launch of the Foresight-1 satellite, we are now able to operate space assets prepared to cross over the Middle East region repeatedly and in record time." He stated that what distinguishes the Foresight-1 satellite is that it provides continuous, high-resolution monitoring solutions, using SAR technology, which is an active sensing system that illuminates the Earth's surface and measures the reflected signal to provide high-resolution images. Unlike traditional optical imaging satellites, SAR satellites can capture images day or night regardless of weather conditions or the reflection of sunlight. 
He said that this technology will enhance the quality of geospatial solutions and services provided by Bayanat and Yahsat, in addition to enhancing capabilities in disaster management, marine monitoring, and smart mobility applications. He pointed out that Emirati citizens constituted more than 30 per cent of the Earth Observation Space Programme's workforce, reflecting the commitment to developing highly qualified national cadres in one of the most vital sectors. The merger between Bayanat and Yahsat is expected to be completed before the end of this year, subject to obtaining final approvals from regulatory authorities in the UAE and internationally. He noted that the merger will establish "Space42" as a leading Emirati company in the space sector with a global footprint, supporting the country's efforts to realise the National Space Strategy 2030. Al Hosani stressed that Space42 will continue on the same path towards the goals of the National Space Strategy 2030, supporting the country's efforts to develop its space capabilities, strengthen national security, foster innovation, encourage international cooperation, drive economic growth, and enhance urban development through space technology.
  13. In preparation for liftoff on 4 September 2024 (3 September Kourou time), the Vega–Sentinel-2C upper-composite has been hoisted into the launch tower at Europe’s Spaceport. The Sentinel-2 mission is based on a constellation of two identical satellites, Sentinel-2A (launched in 2015) and Sentinel-2B (launched in 2017), flying in the same orbit but 180° apart to optimise coverage and revisit time. Each satellite carries a high-resolution multispectral imager to deliver optical images from the visible to the shortwave-infrared region of the electromagnetic spectrum. From an altitude of 786 km, the satellites provide images in 13 spectral bands with resolutions of 10, 20 and 60 m over a large swath width of 290 km. Data collected from Sentinel-2 are used for a wide range of applications, including precision farming, water quality monitoring, natural disaster management and methane emission detection. Sentinel-2C launches on Vega, Europe’s nimble rocket specialising in launching small scientific and Earth observation spacecraft to orbits such as the Sun-following, sun-synchronous polar orbit. At 30 m tall, Vega weighs 137 tonnes on the launch pad and reaches orbit with three solid-propellant stages before the fourth liquid-propellant stage takes over for precise placement of Sentinel-2C into its orbit. By rocket standards Vega is lightweight and powerful: the first three stages burn through their fuel and bring Vega and its satellite to space in just seven minutes. Once in orbit, Sentinel-2C will replace its predecessor, Sentinel-2A, while Sentinel-2D will later replace Sentinel-2B. ESA - Sentinel-2C in the Vega launch tower
  14. Images gathered by the UK military’s first satellite will be shared with allies, the Ministry of Defence (MoD) has said. The department said the war in Ukraine had shown that the use of space is “crucial” to military operations. The satellite, named Tyche, was launched on Friday from a rocket owned by SpaceX, the company co-founded by technology entrepreneur and billionaire Elon Musk. Along with military information, data from the satellite is intended to be accessible by other UK Government departments for uses including environmental disaster monitoring, mapping information development and tracking the impact of climate change globally, according to the MoD. Tyche, which is comparable in size to a washing machine, was designed and built in the UK through a £22 million contract awarded to Surrey Satellite Technology Ltd (SSTL) and is the first satellite to be fully owned by the MoD. SSTL received the first signals from Tyche a few hours after lift-off, confirming the successful launch from Vandenberg Space Force Base, California, on a SpaceX Falcon 9 rocket as part of the Transporter 11 mission. Over a five-year life span, the 150kg satellite will provide imagery to support the UK armed forces and is the first to be launched by the MoD out of a planned constellation of satellites under its space-based Intelligence, Surveillance and Reconnaissance (ISR) programme. Maria Eagle, minister for defence procurement and industry, said: “Tyche will provide essential intelligence for military operations as well as supporting wider tasks across government. “Tyche also shows the UK’s commitment to support innovation in science and technology, stimulating growth across the sector and supporting highly-skilled jobs in the UK.” The MoD said the design and build of Tyche had supported about 100 high-skilled roles at SSTL since 2022. UK Space Commander Major General Paul Tedman said: “This is a fabulous day for UK space.
“The successful launch of Tyche has shown that UK Space Command, and its essential partners across defence and industry, can rapidly take a concept through to the delivery of a satellite capability on orbit. “Tyche represents the first of a future constellation of Intelligence, Surveillance and Reconnaissance satellites that we’ll launch over the coming years. “I’d like to take this opportunity to congratulate everybody involved with Tyche and thank them for their support.” Defence Equipment and Support space team leader Paul Russell described the project as an “exciting journey”. He said: “To see Tyche – the first of a new generation of UK military capabilities – delivered into orbit is an incredibly proud moment and a tribute to everyone’s commitment to this key project.”
  15. Everything from drones to airplanes, ships, and cars is equipped with GPS units to help navigate around the world. This information is crucial not only for powering autonomous navigation systems, but also for supplying human operators with the information they need to get where they are going. But while this technology has become essential in the modern world, our reliance on it is somewhat concerning. In some locations GPS signals are blocked by obstructions, so the systems that rely on them are useless. Worse yet, GPS signals can be intentionally spoofed or jammed, which could lead to widespread chaos and tragedy. These problems could be averted by using self-contained motion sensors rather than signals from a constellation of satellites. But that would require motion sensors thousands of times more accurate than the types we have in our smartphones and other consumer electronics. The technology does exist today, but to be accurate enough to replace GPS, a quantum inertial measurement unit is currently the only option. These systems require six atom interferometers, each large enough to fill a small room. That is not exactly practical for the vast majority of applications, and as you might expect, these systems are also extremely expensive. Researchers at Sandia National Laboratories have been working on a much more compact atom interferometer, however, which could make precise, GPS-free navigation a practical reality in the near future. The new system is based on Photonic Integrated Circuits (PICs), which make it significantly more compact than the traditional laser systems used in atom interferometers. The new technology is also more resistant to vibrations and shocks, making it ideal for use in challenging environments. PICs are small, durable chips that can perform the same functions as larger, more complex laser systems.
These chips integrate various components — like modulators and amplifiers — onto a single platform, making the entire system smaller, more robust, and easier to produce. One key innovation is the development of a silicon photonic modulator, which is crucial for controlling the light in these systems. This modulator allows the system to generate and manage multiple laser frequencies from a single source, eliminating the need for multiple lasers. These novel modulators were also noted to substantially reduce unwanted echoes called sidebands that plague existing technologies. The result is a compact, high-performance laser system that can be used in a variety of advanced applications, including quantum sensors like atomic clocks and gyroscopes. Overall, this represents a significant step forward in making these advanced sensing technologies more practical and deployable in real-world situations. The team also pointed out that the applications of their technology extend well beyond navigation. These sensors could, for example, be used to locate natural resources hidden beneath the ground by observing how they alter Earth’s gravitational force. Further potential applications exist in enhancing LIDAR sensors, quantum computing, and optical communications.
  16. When two Finnair planes flying into Estonia recently had to divert in quick succession and return to Helsinki, the cause wasn’t a mechanical failure or inclement weather—it was GPS denial. GPS denial is the deliberate interference with the navigation signals used and relied on by commercial aircraft. It’s not a new phenomenon: The International Air Transport Association (IATA) has long provided maps of regions where GPS was routinely unavailable or untrusted. However, concern is growing rapidly as conflict spreads across Europe, the Middle East, and Asia, and GPS jamming and spoofing become weapons of economic and strategic influence. Several adversarial nations have been known to use false (spoofed) GPS signals to interfere with air transit, shipping, and trade, or to disrupt military logistics in conflict zones. And recent discussions of anti-satellite weapons have renewed fears of deliberate actions designed to wreak economic havoc by knocking out GPS. GPS has become so ubiquitous in our daily lives that we hardly think about what happens when it’s not available. A GPS outage would result in many online services becoming unavailable (these rely on GPS-based network synchronization), failure of in-vehicle satnav, and no location-based services on your mobile phone. Analyses in the U.S. and U.K. have both estimated the temporary economic cost of an outage at approximately $1 billion per day—but the strategic impacts can be even more significant, especially in a conflict. The saying is that infantry wins battles, but logistics wins wars. It’s almost unimaginable to operate military logistics supply chains without GPS given the heavy reliance on synchronized communications networks, general command and control, and vehicle and materiel positioning and tracking. All of these centrally rely on GPS and all are vulnerable to disruption.
Most large military and commercial ships and aircraft carry special GPS backups for navigation because there was, in fact, a time before GPS. GPS is not available in all settings—underground, underwater, or at high latitudes. The GPS alternatives rely on signals that can be measured locally (for instance, motion or magnetic fields, as used in a compass), so a vessel can navigate even when GPS is unavailable or untrusted. For example, inertial navigation uses special accelerometers that measure vehicle motion, much like the ones that help your mobile phone reorient when you rotate it. Measuring how the vehicle is moving and using Newton’s laws allows you to calculate your likely position after some time. Other “alt-PNT” approaches leverage measurements of magnetic and gravitational fields to help navigate against a known map of these variations near the Earth’s surface. In addition, ultrastable locally deployed clocks can keep communications networks synchronized during GPS outages (such networks typically rely on GPS timing signals to remain synchronized). Nonetheless, we rely on GPS because it’s simply much better than the backups. Focusing specifically on positioning and navigation, achieving good performance with conventional alternatives typically requires significantly increased system complexity, size, and cost, limiting deployment options on smaller vehicles. Those alternative approaches to navigation are also unfortunately prone to errors due to the instability of the measurement equipment in use—signals gradually change over time, with varying environmental conditions, or with system age. We keep today’s alternatives in use as a backstop for critical military and commercial applications, but the search is on for something significantly better than what’s currently available. That something looks to be quantum-assured navigation, powered by quantum sensors.
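The inertial-navigation idea described above, measuring acceleration and applying Newton's laws to update velocity and position, can be sketched as a toy one-dimensional dead-reckoning loop (illustrative only; real systems integrate in three dimensions and fuse gyroscope data):

```python
def dead_reckon(accels, dt):
    """Integrate 1-D acceleration samples (m/s^2) into velocity and
    position with simple Euler steps -- textbook dead reckoning."""
    position, velocity = 0.0, 0.0
    for a in accels:
        position += velocity * dt   # advance position using current velocity
        velocity += a * dt          # then update velocity from acceleration
    return position, velocity

# Constant 1 m/s^2 acceleration for 10 s, sampled once per second:
pos, vel = dead_reckon([1.0] * 10, dt=1.0)
print(pos, vel)  # → 45.0 10.0
```

Because each step builds on the last, any accelerometer bias compounds into a rapidly growing position error, which is exactly why the stability of quantum accelerometers matters for long GPS-denied runs.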
Quantum sensors rely on the laws of nature to access signatures that were previously out of reach, delivering both extreme sensitivity and stability. As a result, quantum-assured navigation can deliver defense against GPS outages and enable transformational new missions. The most advanced quantum-assured navigation systems combine multiple sensors, each picking up unique environmental signals relevant to navigation, much the way autonomous vehicles combine lidar, cameras, ultrasonic detectors, and more to deliver the best performance. This starts with a new generation of improved quantum inertial navigation, but quantum sensing allows us to go further by accessing new signals that were previously largely inaccessible in real-world settings. While it may be surprising, Earth’s gravity and magnetic fields are not constant everywhere on the planet’s surface. We have maps of tiny variations in these quantities that have long been used for minerals prospecting and even underground water monitoring. We can now repurpose these maps for navigation. We’re building a new generation of quantum gravimeters, magnetometers, and accelerometers—powered by the quantum properties of atoms—to be sensitive and compact enough to measure these signals on real vehicles. The biggest improvements come from enhanced stability. Atoms and subatomic particles don’t change, age, or degrade—their behavior is always the same. That’s something we are now primed to exploit. Using a quantum-assured navigation system, a vehicle may be able to position itself precisely even when GPS is unavailable for very long periods: not simply hours or days, as is achievable with the best military systems today, but weeks or months. In quantum sensing, we have already achieved quantum advantage—the point at which a quantum solution decidedly beats its conventional counterparts. The task at hand is now to take these systems out of the lab and into the field in order to deliver true strategic advantage. That’s no mean feat.
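Navigating against a map of gravity variations boils down to profile matching: slide the measured anomaly profile along the stored map and keep the offset with the smallest error. A toy one-dimensional sketch of the idea (illustrative only, not any vendor's algorithm; the map values and units are made up):

```python
def match_position(gravity_map, measured):
    """Slide a short measured gravity profile along a 1-D anomaly map
    and return the offset with the smallest sum of squared errors."""
    best_offset, best_err = 0, float("inf")
    for offset in range(len(gravity_map) - len(measured) + 1):
        err = sum((gravity_map[offset + i] - m) ** 2
                  for i, m in enumerate(measured))
        if err < best_err:
            best_offset, best_err = offset, err
    return best_offset

# Anomaly values (arbitrary units) along a track, plus a noisy reading
# that roughly matches map positions 2..4:
track = [0.0, 0.2, 0.9, 1.4, 0.8, 0.1, -0.3, -0.5]
reading = [0.85, 1.45, 0.75]
print(match_position(track, reading))  # → 2
```

Real systems do this in two dimensions with probabilistic filters, but the principle is the same: the better the sensor's stability, the sharper the match against the map.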
Real platforms are subject to interference, harsh conditions, and vibrations that conspire to erase the benefits we know quantum sensors can provide. In recent cutting-edge research, new AI-powered software can be used to deliver the robustness needed to put quantum sensors onto real moving platforms. The right software can keep the systems functional even when they’re being shaken and subjected to interference on ships and aircraft. To prevent a repeat of the Finnair event, real quantum navigation systems are now starting to undergo field testing. Our peers at Vector Atomic recently ran maritime trials of a new quantum optical clock. The University of Birmingham published measurements with a portable gravity gradiometer in the field. At Q-CTRL, we recently announced the world’s first maritime trial of a mobile quantum dual gravimeter for gravity map matching at a conference in London. My team is excited to now work with Airbus, which is investigating software-ruggedized quantum sensors to provide the next generation of GPS backup on commercial aircraft. Our full quantum navigation solutions are about to commence flight safety testing with the first flights later in the year, following multiple maritime and terrestrial trials. With a new generation of quantum sensors in the field, we’ll be able to ensure the economy keeps functioning even in the event of a GPS outage. From autonomous vehicles to major shipping companies and commercial aviation, quantum-assured navigation is the essential ingredient in providing resilience for our entire technology-driven economy.
  17. A Falcon 9 successfully launched an Earth science mission for Europe and Japan May 28 as part of the European Space Agency’s ongoing, if temporary, reliance on SpaceX for space access. The Falcon 9 lifted off from Vandenberg Space Force Base in California at 6:20 p.m. Eastern. The payload, the Earth Cloud Aerosol and Radiation Explorer (EarthCARE) spacecraft, separated from the upper stage about 10 minutes after liftoff. Simonetta Cheli, director of Earth observation programs at ESA, said in a post-launch interview that controllers were in contact with the spacecraft. “It is all nominal and on track.” Spacecraft controllers will spend the weeks and months ahead checking out the spacecraft’s instruments and calibrating them, she said. That will allow the first release of science data from EarthCARE around the end of this year or early next year. EarthCARE is an 800-million-euro ($870 million) ESA-led mission to study clouds and aerosols in the atmosphere. The spacecraft carries four instruments, including a cloud profiling radar provided by the Japanese space agency JAXA at a cost of 8.3 billion yen ($53 million). JAXA dubbed the spacecraft Hakuryu or “White Dragon” because of the spacecraft’s appearance. The 2,200-kilogram spacecraft, flying in a sun-synchronous orbit at an altitude of 393 kilometers, will collect data on clouds and aerosols in the atmosphere, along with imagery and measurements of reflected sunlight and radiated heat. That information will be used for atmospheric science, including climate and weather models. “EarthCARE is there to study the effect of clouds and aerosols on the thermal balance of the Earth,” said Dirk Bernaerts, ESA’s EarthCARE project manager, at a pre-launch briefing May 21. “It’s very important to observe them all together at the same location at the same time. 
That is what is unique about this spacecraft.” Other spacecraft make similar measurements, including NASA’s Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) spacecraft launched in February. “The observation techniques are different,” he said. “We observe the same thing but observe slightly different aspects of the clouds and aerosols.” He added that EarthCARE would use PACE data to help with calibration and validation of its observations. Development of EarthCARE took about two decades and resulted in cost growth that Cheli estimated at the pre-launch briefing to be 30%. Maximilian Sauer, EarthCARE project manager at prime contractor Airbus, said several factors contributed to the delays and overruns, including technical issues with the instruments as well as effects of the pandemic. One lesson learned from EarthCARE, Cheli said in the post-launch interview, was the need for “strict management” of the project, which she said suffered from the challenges of coordinating work between agencies and companies. The mission also underscored the importance of strong support from member states as it worked to overcome problems, she added. Another factor in EarthCARE’s delay was a change in launch vehicles. EarthCARE was originally slated to go on a Soyuz rocket, but ESA lost access to that vehicle after Russia’s invasion of Ukraine. The mission was first moved to Europe’s Vega C, but ESA decided last June to launch it instead on a Falcon 9, citing delays in returning that rocket to flight as well as modifications to the rocket’s payload fairing that would have been needed to accommodate EarthCARE. Technically, the shift in launch vehicles was not a major problem for the mission. “Throughout the changes in the launchers we did not have to change the design of the spacecraft,” said Bernaerts. He said that, during environmental tests, engineers put the spacecraft through conditions simulating different launch vehicles to prepare for the potential of changing vehicles.
“From the moment we knew that Soyuz was not available, we have been looking at how stringently we could test the spacecraft to envelope other candidate launchers. That’s what we did and that worked out in the end.” EarthCARE is the second ESA-led mission to launch on a Falcon 9, after the Euclid space telescope last July. Another Falcon 9 will launch ESA’s Hera asteroid mission this fall. “We had a good experience with Euclid last year,” said Josef Aschbacher, ESA director general, in a post-launch interview. “Our teams and the SpaceX teams are working together very well.” The use of the Falcon 9 is a stopgap until Ariane 6 enters service, with a first launch now scheduled for the first half of July, and Vega C returns to flight at the end of the year. “I hear lots of questions about why we’re launching with Falcon and not with Ariane, and it’s really good to see the Ariane 6 inaugural flight coming closer,” he said. Those involved with the mission were simply happy to finally get the spacecraft into orbit. “There is a feeling of relief and happiness,” Cheli said after the launch. “This is an emotional roller coaster,” said Thorsten Fehr, EarthCARE mission scientist at ESA, on the agency webcast of the launch shortly after payload separation. “This is one of the greatest moments in my professional life ever.”
  18. Maker Ilia Ovsiannikov is working on a friendly do-it-yourself robot kit — and as part of that work has released a library to make it easier to use a range of lidar sensors in your Arduino sketches. "I have combined support for various spinning lidar/LDS sensors into an Arduino LDS library with a single platform API [Application Programming Interface]," Ovsiannikov explains. "You can install this library from the Arduino Library Manager GUI. Why support many lidar/LDS sensors? The reason is to make the hardware — supported by [the] Kaia.ai platform — affordable to as many prospective users as possible. Some of the sensors [supported] are sold as used robot vacuum cleaner spare parts and cost as low as $16 or so (including shipping)." The library delivers support for a broad range of lidar sensors from a unified API, Ovsiannikov explains, meaning it's not only possible to get started quickly but to switch sensors mid-project — should existing sensors become unavailable, or pricing shift to favor a different model. It also adds a few neat features of its own, including pulse-width modulation (PWM) control of lidar motors that lack their own control system, using optional adapter boards. While the library is usable standalone, and can even perform real-time angle and distance computation directly on an Arduino microcontroller, Ovsiannikov has also published a companion package to tie it into the Robot Operating System 2 (ROS2). "[The] Kaia.ai robot firmware forwards LDS raw data — obtained from the Arduino LDS library — to a PC running ROS2 and micro-ROS," he explains. "The ROS2 PC kaiaai_telemetry package receives the raw LDS data, decodes that data and publishes it to the ROS2 /scan topic." More information on the library is available in Ovsiannikov's blog post, while the library itself is available on GitHub under the permissive Apache 2.0 license. More information: https://kaia.ai/blog/arduino-lidar-library/ https://github.com/kaiaai/LDS
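The real-time "angle and distance computation" mentioned above amounts to converting each lidar return from polar to Cartesian coordinates in the sensor frame. A minimal Python sketch of that conversion (the library itself does this in C++ on the microcontroller; the function and parameter names here are illustrative, not the library's API):

```python
import math

def scan_to_points(distances_m, start_deg=0.0, step_deg=1.0):
    """Convert one revolution of spinning-lidar ranges (one distance per
    angular step) into (x, y) points in the sensor frame."""
    points = []
    for i, d in enumerate(distances_m):
        if d <= 0:
            continue  # zero/negative range means a dropped return
        theta = math.radians(start_deg + i * step_deg)
        points.append((d * math.cos(theta), d * math.sin(theta)))
    return points

# Four returns at 0°, 90°, 180° and 270°, each 2 m away, yield points
# approximately (2,0), (0,2), (-2,0) and (0,-2):
pts = scan_to_points([2.0, 2.0, 2.0, 2.0], step_deg=90.0)
```

On the ROS2 side, the /scan topic carries essentially the same data (a start angle, an angular increment, and an array of ranges), so the identical math recovers a point cloud from it.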
  19. This has been discussed on their forum: https://www.agisoft.com/forum/index.php?topic=2420.0
  20. Traditionally, surveying relied heavily on manual measurements and ground-based techniques, which were time-consuming, labour-intensive and often limited in scope. With the relatively recent emergence of photogrammetry, surveying professionals gained access to powerful tools that streamline processes, improve accuracy and unlock new possibilities in data collection and analysis.

How videogrammetry works

Videogrammetry is effectively an extension of photogrammetry; the mechanics are grounded in similar principles. Photogrammetry involves deriving accurate three-dimensional information from two-dimensional images by analysing their geometric properties and spatial relationships. The core of videogrammetry is to separate the video footage into images while ensuring sufficient overlap and image quality. The subsequent workflow is almost the same as in photogrammetry. The quality of the output depends heavily on image resolution, frames per second (FPS) and stabilization.

Integration with hardware and software

Central to videogrammetry is the camera system used to capture video footage, along with any supporting GPS devices. Ground-based videogrammetry may utilize smartphone cameras or handheld digital cameras with video capabilities. While many smartphones and digital cameras have built-in GPS capabilities that geotag photographs with location data, their accuracy is at best 2.5m, which is not always sufficient for professional surveying. For this reason, when using a smartphone camera, many surveyors opt for an external real-time kinematic (RTK) antenna with 2cm accuracy. Attaching it to their smartphones enables them to receive correction data from a local NTRIP provider. This is a far more convenient option than placing and measuring ground control points (GCPs) on the terrain in combination with a GNSS device.
Some of the well-known smartphone RTK products on the market today include REDCatch's SmartphoneRTK, ArduSimple's ZED-F9P RTK receiver, and Pix4D's ViDoc RTK. It is important to note that while most smartphones can geotag photographs, not all can geotag video footage. Moreover, those smartphones that can geotag video can only geotag the first frame of the video. For this reason, users may require additional apps (e.g. PIX4Dcatch: 3D scanner) that can embed location data into the video's metadata so that it can be used for surveying and mapping purposes. While non-geotagged videos can still be used for 3D model creation in various software solutions, it is recommended to opt for an RTK receiver as a minimum prerequisite for professional applications. At 3Dsurvey, the preferred setup is Google's Pixel 7a smartphone paired with REDCatch's external RTK. Later this year, 3Dsurvey is set to release a ScanApp that it has developed to embed RTK correction data into video file metadata, enabling automatic georeferencing for videogrammetry projects.

Examples of videogrammetry project approaches

Non-geotagged (using any smartphone or camera): This can be ideal for the 3D documentation of cultural heritage projects. However, this approach lacks the spatial accuracy necessary for tasks such as outdoor mapping or infrastructure monitoring, which demand precise georeferencing.

Accurately geotagged using external RTK (using a smartphone): Accurate geotagging using a smartphone equipped with an external RTK GNSS receiver ensures that the resulting 3D models maintain high spatial fidelity. Therefore, this approach is suitable for applications such as land surveying and small-scale construction monitoring projects where precise positioning is crucial. Examples include mapping manholes, pipes, low-level material piles, dig sites and cables for telecommunication or electricity.
Within a larger photogrammetry/Lidar project: For situations demanding the highest level of accuracy, videogrammetry can fill a gap or add another perspective to the aerial dataset obtained using other technologies, such as ground-level videogrammetry in combination with aerial Lidar (which lacks oblique views). Videogrammetry can also prove invaluable on site while drone mapping, such as when trees obstruct the flight path or if the project requires capturing details facing upwards. Similar to drone workflows, strategically placed and precisely measured GCPs can significantly improve the overall precision of the generated 3D model. Since videogrammetry usually involves capturing data from low angles, consider using AprilTag targets for superior oblique detection.

Challenges and considerations

Videogrammetry offers immense potential for various applications, yet its implementation comes with a set of challenges. Filming excessive footage can result in software inaccuracies, leading to duplicate surfaces in the 3D model. Therefore, it is important to carefully consider the path taken when filming. Some areas may be challenging to film, but if those areas are not captured in the video, they cannot be included when reconstructing the 3D model. Filming while navigating through obstacles, especially on construction sites, requires caution and precision on the part of the user. This sometimes gets in the way of creating a perfect video. Site conditions such as puddles and direct sunlight can affect data accuracy by creating reflective surfaces and casting shadows, respectively. Filming areas obstructed by roofs, trees or walls can degrade the RTK signal, leading to inaccuracies in the final model.

Tips for accurate data capture

The quality of the output depends on a number of factors, including how the data is captured.
The following basic principles can help users to obtain the necessary coordinates when filming so that the 3D model will be as realistic as possible:

Move slowly and steadily: To obtain sharp images, maintain slow and smooth movements. This is especially crucial in poor light conditions, when the shutter speed is low and video frames are more susceptible to blur.

Rotate slowly and move while turning: Just like when towing a trailer, it is necessary to move back and forth rather than trying to turn on the spot.

Don’t ‘wall paint’ when scanning vertical surfaces: Standing in one place while tilting the device up and down will generate a lot of images, but they will all have the same coordinates. Instead, move in a lateral direction while recording at different heights.

Film in connected/closed loops: Try to ensure that the filming ends precisely back at the starting point.

Advantages of videogrammetry

Videogrammetry offers significant advantages in surveying, particularly when smartphones are leveraged as data capture devices. The portability and convenience of smartphones enables swift, efficient and accessible data collection in a wide range of situations, making it possible to document areas that are small, rapidly changing, or require close-up details in hard-to-reach places. Moreover, unlike traditional methods requiring specialized equipment and expertise, smartphone videogrammetry empowers more professionals to capture and reconstruct 3D data. The standout feature of videogrammetry is that it eliminates overlap concerns, since the video is continuously shot at approximately 30 FPS. This accessibility paves the way for even broader surveying applications. Integrating videogrammetry into a data collection toolkit promises to accelerate project timelines, streamline workflows and, above all, improve responsiveness, since a smartphone is always on site. This makes videogrammetry a cost-effective solution for surveying tasks.
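The "move slowly and steadily" advice can be quantified: motion blur in pixels is roughly camera speed times exposure time, scaled from metres to pixels by the image width over the ground footprint. A quick sanity-check helper (a simplified pinhole-camera approximation with illustrative numbers, not a rule from any vendor's documentation):

```python
def blur_pixels(speed_m_s, exposure_s, image_width_px, footprint_m):
    """Approximate motion blur in pixels: ground motion during the
    exposure, scaled from metres to pixels by the image width."""
    ground_motion_m = speed_m_s * exposure_s
    return ground_motion_m * image_width_px / footprint_m

# Walking at 1 m/s with a 1/60 s exposure, a 4000 px wide frame and a
# 5 m ground footprint smears detail across roughly 13 pixels:
print(round(blur_pixels(1.0, 1 / 60, 4000, 5.0), 1))  # → 13.3
```

Halving the walking speed, or letting brighter light shorten the exposure, halves the blur, which is why the tips above stress slow movement in poor light.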
Videogrammetry in practice

These three case studies illustrate the practical application of videogrammetry in various situations, ranging from a simple scenario to a mid-sized construction site. A Pixel smartphone with RTK and the 3Dsurvey ScanApp were used in all cases.

1. Pile volume calculation: Photogrammetry is commonly used to calculate the volume of material piles, but the pre-flight preparation and planning can be time-consuming. Moreover, drones are bulky and less portable than a smartphone. Therefore, videogrammetry – with a handheld RTK antenna connected to a smartphone – can offer a much faster and simpler alternative. In this project to capture a small pile of material, the smartphone was simply held up high, tilted down and moved in a circle around the heap. During a five-minute site visit, a 75-second video was recorded, from which 152 frames were extracted. The processing time amounted to 30 minutes.

2. Preserving cultural heritage: Archaeological excavation sites and statues that require 3D documentation are often located in crowded urban areas which may be subject to strict drone regulations. Some culturally significant items may be located indoors, such as in museums, where photogrammetry is challenging. Moreover, using ground surveying equipment like laser scanners requires highly technical knowledge. This can be an expensive option and therefore unsuitable for such projects. In a project to capture a complete and accurate 3D scan of a dragon statue, a total of 20 minutes was spent on site. 226 frames were extracted from 113 seconds of video. The subsequent processing time was one hour.

3. Underground infrastructure project: Videogrammetry can be successfully used in the context of underground construction and engineering projects, such as when laying pipes into a trench. Compared with documenting the site with traditional equipment, using a smartphone equipped with RTK technology makes the documentation process remarkably efficient.
Just as with drone photogrammetry, multiple mappings can be performed to track progress. As a further advantage, videogrammetry makes it possible to get really close and record details that may be hidden from a top-down aerial view. In support of a construction and engineering project for the installation of underground fuel tanks, videogrammetry was used to document the site, extract exact measurements, monitor the width and depth, calculate the volume and extract profile lines. A 15-minute site visit was sufficient to record 87 seconds of video, from which 175 frames were extracted. The processing time amounted to 45 minutes.

Conclusion

While videogrammetry is not a replacement for established surveying techniques such as photogrammetry and laser scanning, it has emerged as a valuable addition to the surveyor's toolkit. The technology offers professionals significant gains in convenience, efficiency and flexibility, because surveyors can capture data using just a smartphone. This allows faster and more accessible data capture across a wide range of situations, and provides cost-effective solutions adaptable to specific needs, since it integrates seamlessly with existing surveying workflows. Although filming with a smartphone currently has its limitations, continuous advancements in hardware, software and best practices are steadily improving the accuracy and reliability of data collection. This ongoing evolution means that videogrammetry has the potential to contribute to better-informed decision-making and to become an indispensable tool for the modern construction professional.

Source: gim-international
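The three case studies imply a remarkably consistent frame-extraction rate: dividing the number of extracted frames by the video length gives roughly 2 fps in every case. A quick check:

```python
# (frames extracted, video length in seconds) from the three case studies
cases = {
    "pile volume":       (152, 75),
    "dragon statue":     (226, 113),
    "underground tanks": (175, 87),
}

for name, (frames, seconds) in cases.items():
    rate = frames / seconds
    print(f"{name}: {rate:.2f} frames per second")
# Each rate comes out at roughly 2.0 fps.
```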
  21. Lurker

    Cluster polygons

    something like this? python - Polygon clustering - Geographic Information Systems Stack Exchange
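One common approach to the polygon-clustering problem (in the spirit of the linked Stack Exchange thread) is to group polygons whose centroids lie within a distance threshold of each other. A minimal dependency-free single-linkage sketch on centroid coordinates; in a real QGIS workflow you would more likely use shapely centroids with scikit-learn's DBSCAN:

```python
from math import hypot

def cluster_centroids(points, max_dist):
    """Group 2D points (e.g. polygon centroids) so that any two points
    within `max_dist` of each other end up in the same cluster."""
    clusters = []                       # list of lists of points
    for p in points:
        # find every existing cluster this point touches
        merged = [c for c in clusters
                  if any(hypot(p[0] - q[0], p[1] - q[1]) <= max_dist
                         for q in c)]
        new = [p] + [q for c in merged for q in c]
        clusters = [c for c in clusters if c not in merged]
        clusters.append(new)
    return clusters

pts = [(0, 0), (1, 0), (10, 10), (11, 10), (50, 50)]
print(len(cluster_centroids(pts, max_dist=2)))  # -> 3 clusters
```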
  22. What version of QGIS do you use right now?
  23. Did the error occur in all the historical data you've checked?
  24. My previous post actually addresses your first question 😁 I saw that you already have the equation relating the band value and the depth, so it should be fine to apply it directly to obtain the depth values.
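Assuming the band-to-depth relation mentioned above is a simple fitted linear model (the coefficients below are hypothetical placeholders, not the actual regression result), applying it to every pixel of a raster band is a one-liner with NumPy:

```python
import numpy as np

# Hypothetical coefficients from a regression of known depths against a
# band value (or band ratio) -- replace with your own fitted values.
A, B = -12.5, 3.0   # depth = A * band_value + B  (metres)

def band_to_depth(band: np.ndarray) -> np.ndarray:
    """Apply the fitted linear relation to every pixel of a band array."""
    return A * band + B

band = np.array([0.10, 0.20, 0.30])
print(band_to_depth(band))  # -> [1.75, 0.5, -0.75]
```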