Everything posted by mamadouba

  1. What is your imagery source?
  2. The .MTL.txt is a plain text file and the ARD .xml is an XML document. You cannot just change the extension and expect an XML schema to be parsed like the text file. Harris has not developed a parser for this yet, but they will likely add one in the future, just as they did for reading the Sentinel-2 .xml. In the meantime, you can write your own IDL code to parse the XML file and pull out the metadata of interest; those parameters can then be used in an ENVITask workflow chain (see the sketch below). I had to do this when the Collection 1 MTL changed from the original format and broke the ENVI parser. The GeoTIFF itself will open like any other image.
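     For illustration, a minimal Python sketch of the same parsing idea (the file name and tag names are hypothetical placeholders; inspect your actual ARD .xml for the real ones):

     ```python
     # Parse an ARD metadata XML and pull values for a downstream workflow.
     # The file name and tag names below are hypothetical placeholders.
     import xml.etree.ElementTree as ET

     tree = ET.parse('LC08_CU_012005_20170701_C01_V01.xml')  # hypothetical name
     root = tree.getroot()

     # Flatten every leaf element into a tag -> text dictionary
     metadata = {elem.tag: elem.text for elem in root.iter()
                 if elem.text and elem.text.strip()}

     sun_elevation = float(metadata['sun_elevation'])  # hypothetical tag name
     print(sun_elevation)
     ```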
  3. It could be the correct approach, but it's basic or applied research that will take trial and error through the scientific process. There is no canned answer. There are plenty of studies that support NDVI, or other vegetation indices (both narrowband and broadband), serving as a proxy for crop health, vigor, LAI, etc. Red edge has also shown promise in detecting the subtleties between a green, healthy plant and a relatively green, unhealthy plant. The common denominator among these studies is field data; field data collected in a meaningful and statistically rigorous manner so you can model the relationship between a vegetation index and the phenomenon of interest, in this case blight (see the sketch below for what that modeling step might look like). Have you collected ground truth data on plants with blight and plants without? Can this be discerned at the spatial resolution you are collecting at (5 meters overhead)? If you can answer that and can recommend optimal collection parameters and methods for detecting blight with UAS, then I highly suggest you publish your results.
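     As a purely illustrative sketch of that modeling step (the NDVI values and blight labels below are fabricated placeholders, not real data):

     ```python
     # Sketch: relate plot-mean NDVI to field-observed blight presence.
     # All numbers are fabricated placeholders; real work requires a
     # statistically rigorous field sample.
     import numpy as np
     from sklearn.linear_model import LogisticRegression

     ndvi = np.array([[0.82], [0.79], [0.55], [0.48], [0.76], [0.51]])  # plot means
     blight = np.array([0, 0, 1, 1, 0, 1])  # 1 = blight observed in the field

     model = LogisticRegression().fit(ndvi, blight)
     print(model.predict_proba([[0.60]]))  # [P(healthy), P(blight)] for a new plot
     ```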
  4. UAS or aerial thermal infrared calibration = field data collection. Period!
  5. Hi Georgeina, the data products I was referring to in my post are only related to Landsat ordered through the ESPA interface. ESPA provides the raw digital numbers (DN) along with a variety of higher-level scientific products such as surface reflectance (SR), spectral indices (e.g., NDVI), Fmask (cloud and shadow), etc. The surface reflectance product has undergone radiance conversion and atmospheric correction using a physics-based, first-principles radiative transfer model, namely 6S. LEDAPS is applied to Landsat 4, 5, and 7, and a similar algorithm (L8SR) is applied to Landsat 8. The purpose of ESPA is to bypass all of the manual digital image processing that an analyst would otherwise have to undertake (geometric, radiometric, and atmospheric correction) so you can focus on the analysis. In other words, ESPA provides research-ready products. With regard to your work, are you just conducting an analysis, or do you want to learn these techniques yourself? It is good to know the fundamentals behind image pre-processing, and the best way to learn is to conduct the work yourself. Is Erdas your primary image processing software, or do you use ENVI or something else?
  6. Georgeina, what software do you have access to? Do you have a USGS EarthExplorer account? You can order your products in bulk using the USGS ESPA interface (https://espa.cr.usgs.gov/login). You can place an order of up to 5,000 scenes for L4/5, L7, and L8. The available products include surface reflectance and several different indices (e.g., NDVI, EVI, SAVI, etc.), as well as a cloud/shadow mask for masking NULL values. https://landsat.usgs.gov/sites/default/files/documents/espa_odi_userguide.pdf
  7. Highly turbid water can cause positive NDVI values, and illumination effects can saturate pixels. Run the following band math and see if you can extract the water pixels (assuming NDVI was calculated properly): water = fix((ndvi LT 0.01) AND (toa5 LT 0.11)) OR fix((ndvi LT 0.1) AND (toa5 LT 0.05)), where LT means "less than" and toa5 is top-of-atmosphere reflectance in the NIR band (the number designation is the band number). This is the water test used in the Fmask algorithm by Zhe Zhu of Boston University. It performs very well, except where water is shallow or highly turbid. If this formula fails to discriminate the water pixels for the problem area, then your NDVI is probably correct and it's an image-based anomaly. Radiometric correction doesn't solve everything.
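     For reference, the same test as a Python/numpy sketch (ndvi and toa_nir are assumed to be float reflectance arrays you already have in hand):

     ```python
     # Fmask-style water test (Zhu & Woodcock 2012) as a numpy sketch.
     # ndvi and toa_nir are assumed to be float arrays of NDVI and
     # top-of-atmosphere NIR reflectance, respectively.
     import numpy as np

     def water_test(ndvi, toa_nir):
         # True where a pixel looks like water
         return ((ndvi < 0.01) & (toa_nir < 0.11)) | \
                ((ndvi < 0.1) & (toa_nir < 0.05))
     ```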
  8. Do you have access to Amazon Web Services? Upload your data to an S3 bucket, spin up an EC2 instance, and use AWS Batch with the software you intended to use.
  9. zabdi3l, jake stated that the imagery was sourced from the EarthExplorer surface reflectance product; therefore, the data are already calibrated and atmospherically corrected to surface reflectance using LEDAPS (Landsat 4-7). There is no need to calibrate in ENVI or any other software. However, the data do have a scale factor of 0.0001, which must be taken into consideration and applied if you want to convert the data from 16-bit integer back to floating point.
  10. The surface reflectance products have a scale factor of 0.0001; for example, a stored value of 1000 is actually 0.10. Rescale the data first or rescale your formula coefficients; either way works (see the sketch below).
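     A one-liner sketch in Python/numpy, assuming dn is the 16-bit surface reflectance array:

     ```python
     import numpy as np

     dn = np.array([1000, 2500, 7431], dtype=np.int16)  # scaled SR values
     reflectance = dn.astype(np.float32) * 0.0001       # e.g., 1000 -> 0.10
     ```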
  11. I couldn't agree more with rahmansunbeam, especially if you are applying image processing to civil-source remote sensing data. Google Earth Engine provides every function found in a standard remote sensing software package, plus a host of "shallow" machine learning algorithms for map classification and petabytes of data in its catalog. The only resources a user needs are a web browser and some knowledge of JavaScript, and the API documentation and examples are easy to follow. If you need to go further, and coding expertise and funding are available, you can fire up an EC2 instance on AWS. I run all of my deep learning and machine vision analyses on a multi-GPU EC2 instance, and the pricing is nominal.
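     If JavaScript isn't your thing, the same kind of workflow runs through the Earth Engine Python client; a minimal sketch (assumes you have installed earthengine-api and already authenticated):

     ```python
     # Minimal Earth Engine sketch via the Python client: a Sentinel-2
     # median composite and an NDVI band. Assumes prior authentication.
     import ee

     ee.Initialize()

     s2 = (ee.ImageCollection('COPERNICUS/S2')
             .filterDate('2017-06-01', '2017-09-01')
             .median())
     ndvi = s2.normalizedDifference(['B8', 'B4'])  # (NIR - Red) / (NIR + Red)
     print(ndvi.bandNames().getInfo())
     ```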
  12. If you are referring to the simple "dehaze" options in Erdas, then you shouldn't expect perfection; depending on the source and the particulates, some haze cannot be removed. If you are referring to atmospheric correction to a Level-2A product, then you should be using Sen2Cor. http://step.esa.int/main/third-party-plugins-2/sen2cor/
  13. The distinction here is Digital Surface Model (DSM) vs. Digital Elevation Model (DEM). I think of the DEM as the bare-earth elevation and the DSM as including the features above that surface; taking the difference between the two models gives a normalized DSM (nDSM) of feature heights (see the sketch below). You can create electro-optical (EO) point clouds from imagery, but you will not be able to do this with Sentinel or Landsat because only nadir data are provided. EO point clouds require multiple collects at multiple look angles so photogrammetric techniques can be applied to derive heights. Only agile commercial sensors like WorldView provide this capability, along with UAS. Here's some information: http://www.harrisgeospatial.com/Home/NewsUpdates/TabId/170/ArtMID/735/ArticleID/14517/3D-Point-Cloud-Generation-from-Stereo-Imagery.aspx
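     The differencing step itself is trivial; a Python sketch with rasterio (file names are hypothetical, and both rasters are assumed to be co-registered on the same grid):

     ```python
     # nDSM = DSM - DEM. Hypothetical file names; both rasters are
     # assumed to share the same grid and extent.
     import rasterio

     with rasterio.open('dsm.tif') as dsm, rasterio.open('dem.tif') as dem:
         ndsm = dsm.read(1).astype('float32') - dem.read(1).astype('float32')
         profile = dsm.profile
         profile.update(dtype='float32')

     with rasterio.open('ndsm.tif', 'w', **profile) as dst:
         dst.write(ndsm, 1)  # heights above bare earth
     ```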
  14. Are you interested in yield monitoring or biomass monitoring? These are two separate metrics: the former refers to crop production and the latter refers to the entire vegetative structure. For the sake of this discussion, let's stay with biomass. 1. Absolute biomass: estimating this metric from remotely sensed data requires field data for modeling... period. 2. Relative biomass: simply an increase or decrease compared to a baseline. This can actually be done using NDVI because of the high positive correlation between NDVI and biomass (and yield, and LAI, etc.). Let's assume you do not have field data to establish how strongly NDVI correlates with your particular variable of interest, but there is enough literature to cite that will support using NDVI as a proxy or surrogate for biomass. I don't know what your monitoring period is, but you need to establish a robust baseline, which cannot be captured within the Sentinel-2 record alone; you will need to develop a time series dataset composed of Landsat and Sentinel-2. The simple workflow is to create a multitemporal NDVI time series over several years, develop an average NDVI for the peak growing period (i.e., the biomass metric), and compare current observations against it. The resulting statistic is the difference from the baseline, or relative biomass (see the sketch below).
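     A sketch of the baseline/anomaly arithmetic in Python/numpy (the stack here is random placeholder data; in practice it would hold one peak-season NDVI composite per year):

     ```python
     # Relative-biomass sketch: compare this season's peak NDVI against
     # a multi-year baseline. Placeholder data shaped (n_years, rows, cols).
     import numpy as np

     rng = np.random.default_rng(0)
     ndvi_stack = rng.uniform(0.2, 0.8, size=(10, 100, 100))  # placeholder

     baseline = ndvi_stack[:-1].mean(axis=0)  # average of prior years' peaks
     anomaly = ndvi_stack[-1] - baseline      # current year minus baseline
     ```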
  15. The FLAASH algorithm isn't the issue; it's the smoke. Smoke is an aerosol that has major attenuation effects at all optical wavelengths. There is no radiative transfer model out there, be it FLAASH, MODTRAN, 6S, etc., that will model at-surface reflectance if the sensor, at the specified wavelength, cannot see the reflecting surface. Furthermore, this is exacerbated by the narrow channels (less reflected energy captured per band) on hyperspectral sensors.
  16. If the masked values are actually NaN in ENVI, and not 0, you can use the following band math expression (this example replaces NaN with 999 and preserves all other values): (999 * finite(b1, /nan)) + (finite(b1) * (b1 > (-1e+34)))
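     The Python/numpy equivalent is a one-liner (sketch; b1 is a placeholder float array):

     ```python
     import numpy as np

     b1 = np.array([0.3, np.nan, 0.7], dtype=np.float32)  # placeholder band
     out = np.where(np.isnan(b1), 999.0, b1)  # NaN -> 999, keep the rest
     ```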
  17. There are two schools of thought on this topic: deriving crop types by phenology, or spatial downscaling. Phenology metrics: the researcher either develops or applies existing land cover maps to derive long-term time series statistics from vegetation indices (e.g., 250-meter MODIS NDVI). These temporal phenological profiles can be used to derive broad-category crop types, e.g., winter vs. summer crops. For example, winter grains (wheat and barley) in the Middle East will show a start of season (emergence) around January to February, greenup will peak around April, and the profile will begin to drop in May-June, signifying senescence and harvest. If the region is double-cropped, the same principles apply to the summer crops. Knowing the specific crop calendars will serve as a guide, and there are a number of peer-reviewed papers explaining variations of this approach (a toy sketch follows below). Spatial downscaling: this seems more in line with your current resources. It requires allocating or disaggregating information to a different scale (e.g., 10 km to 1 km); cross-entropy and other techniques apply. http://www2.toulouse.inra.fr/lerna/chercheurs/thomas/projets/ADD%20WP%203%20(RChakir)%20v2.pdf
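     A toy Python sketch of pulling phenology metrics from one pixel's annual NDVI profile (the composite values and the emergence threshold are placeholders; real work uses multi-year smoothed series):

     ```python
     # Toy phenology sketch for one pixel-year: start of season as the
     # first composite where NDVI crosses a threshold, peak as the maximum.
     import numpy as np

     ndvi = np.array([0.18, 0.22, 0.35, 0.55, 0.72, 0.61, 0.40, 0.25])  # placeholder composites
     threshold = 0.30                        # hypothetical emergence threshold
     sos = int(np.argmax(ndvi > threshold))  # index of first composite above it
     peak = int(np.argmax(ndvi))             # index of peak greenness
     print(sos, peak)
     ```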
  18. sigologo, even better, Allard de Wit has posted all of his IDL code to GitHub, and it includes a Savitzky-Golay routine that models data within a user-defined range. https://github.com/ajwdewit/idl_adewit You will find HANTS under the hants directory and Savitzky-Golay under the sagof directory. These scripts were developed to process time series image stacks, so there is no need for looping (I wish I had these a year ago). Here's what he says about SavGol: "This is an implementation of the Savitsky-Golay filter for processing time-series of satellite data. It uses ENVI for tiling over the stack of satellite images. This implementation is very close to the original implementation by Chen et al (2004) but it has some drawbacks that it does not do iterative filtering like HANTS does (could be added easily though)."
  19. Thanks for the contributions, sigologo and oz1; this has turned into a very informative thread. Choosing an optimal spectral index is important since no index is perfect. I tend to apply NDVI for most landscapes except the tropics, because saturation typically is not an issue for arid or poor-yield regions. NDVI is also a simple calculation because it does not require the extra multiplicative factors or coefficients of the high-dynamic-range indices; unless ground-truth field data have been collected, many of those factors are chosen arbitrarily through experience and/or heuristics (e.g., the SAVI soil factor of 0.5). In response to sigologo about Savitzky-Golay (SG) and FFT: these are built-in functions in IDL, but they require extra coding to loop through image slices and pixel-wise observations (i,j) because they are meant to process 1-D data. However, there are other non-commercial options: Python, SPIRITS, 52North, TimeSat, HANTS, etc. SG is simply a piece-wise polynomial regression that requires a user-defined window over which to calculate the regression, plus the degree of the polynomial; SG alone does a great job at smoothing data, but it does not deal with outliers (see the scipy sketch below). For FFT I use the Harmonic Analysis of Time Series (HANTS), an iterative FFT that calculates the underlying sine waves (the seasonal signal) and uses this signal to rebuild the series within a user-defined range of good data. I use the MODIS QA/QC flags to flag bad pixels and serve as a mask for HANTS; it is excellent at modeling missing data or outliers unless cloud cover is too persistent (tropics). HANTS is freely available from http://gdsc.nlr.nl/gdsc/en/tools/hants, and the developer, Allard de Wit, provides good documentation on research and application. I use HANTS and SG together: I first remove the noise in my time series using HANTS and then apply minimal smoothing using SG.
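     For the SG step, scipy has this built in; a minimal Python sketch on a synthetic placeholder series (window length and polynomial order are the user-defined choices mentioned above):

     ```python
     # Savitzky-Golay smoothing of a 1-D NDVI time series with scipy.
     # The series below is a synthetic placeholder.
     import numpy as np
     from scipy.signal import savgol_filter

     rng = np.random.default_rng(1)
     t = np.linspace(0, 4 * np.pi, 92)
     ndvi = 0.5 + 0.2 * np.sin(t) + rng.normal(0, 0.03, t.size)  # placeholder

     smoothed = savgol_filter(ndvi, window_length=7, polyorder=2)
     ```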
  20. Before you start choosing sensor data and methodologies, you need a good understanding of your landscape. Desertification needs to relate to the rate of change in the landscape. If the rate of change is extremely high, then comparing data over a few years may work. If the rate of change is low, and the environmental conditions of the landscape are extremely variable, then that must be accounted for in your analysis; otherwise you will end up with erroneous rates of change. My suggestion: take the entire time series record of 250-meter MODIS NDVI over your study area; you can use the 16-day composites for your work. This will give you over 350 observations from February 2000 to present. You can temporally smooth the dataset using a Savitzky-Golay filter or harmonic analysis (Fourier transform), or a combination of both. After the time series has been preprocessed, you can choose to run a mean or median deseasonalizing function if you wish, but it's not necessary. The final step of the analysis is to run a pixel-wise trend statistic using the Mann-Kendall or Seasonal Mann-Kendall test. There are two ways to do this. (1) Extract only the seasons for each year that correspond with peak greenness and run the Mann-Kendall on the subset. The median (Theil-Sen) trend will give you the rate of change (a negative slope is decreasing and a positive slope is increasing), and the Mann-Kendall statistic provides a degree of significance expressed as Z-scores (|Z| > 1.96 can be considered significant change); a per-pixel sketch of this follows below. (2) The second approach is running a Seasonal Mann-Kendall on the entire dataset and extracting the same statistics. There are a number of software packages to facilitate this analysis, including IDL, Matlab, R, etc.; however, Idrisi (now called TerrSet) has all of these functionalities built into the Earth Trends Modeler (ETM). Just be aware that desertification is a time series problem that requires exploitation of the temporal domain; it is not a simple image difference between a few dates.
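     A Python sketch of approach (1) for a single pixel, using scipy (the NDVI values are synthetic placeholders; the significance check here uses Kendall's tau, which is the basis of the Mann-Kendall test; loop this over pixels for a full map):

     ```python
     # Trend sketch for one pixel: Theil-Sen slope plus a Mann-Kendall-
     # style significance check via Kendall's tau. Placeholder data.
     import numpy as np
     from scipy import stats

     years = np.arange(2000, 2018)
     rng = np.random.default_rng(2)
     ndvi_peak = 0.45 - 0.004 * (years - 2000) + rng.normal(0, 0.02, years.size)

     slope, intercept, lo, hi = stats.theilslopes(ndvi_peak, years)  # median trend
     tau, p_value = stats.kendalltau(years, ndvi_peak)               # monotonic-trend test
     print(slope, p_value)  # negative slope + small p -> significant decline
     ```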
  21. NDVI and LAI are positively correlated, just as NDVI and agricultural yields can be positively correlated. This is what makes NDVI a useful proxy for modeling these kinds of biophysical relationships. Unless you are actually sampling LAI in field plots, you're not developing an LAI estimate that can be trusted (i.e., field collection vs. physical models). Since sampling is labor- and time-intensive, the idea is to collect a statistically robust LAI sample throughout your region of interest, develop a statistical relationship between LAI and NDVI, and then extrapolate and model that relationship using NDVI derived from remotely sensed data (see the sketch below). Here's one of many papers devoted to this very subject: http://www.cof.orst.edu/cof/fs/turner/pdfs/turner_rse_1999.pdf
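     The extrapolation step is just a fitted regression applied to the NDVI raster; a Python sketch with fabricated placeholder sample values:

     ```python
     # Sketch: fit LAI ~ NDVI from field plots, then apply to an NDVI
     # raster. Sample values are fabricated placeholders; a real model
     # needs a statistically robust field sample.
     import numpy as np

     ndvi_plots = np.array([0.35, 0.48, 0.60, 0.72, 0.81])  # plot-mean NDVI
     lai_plots = np.array([0.9, 1.8, 2.7, 3.9, 4.8])        # field-measured LAI

     slope, intercept = np.polyfit(ndvi_plots, lai_plots, 1)  # simple linear fit

     ndvi_raster = np.random.default_rng(3).uniform(0.2, 0.9, (100, 100))  # placeholder
     lai_map = slope * ndvi_raster + intercept
     ```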
  22. Use this R script to run random forests for image classification. Create training and validation samples in the GIS of your choice, with field attributes relating to class type, and run the script against the associated layer stack; there's no need to create a separate training raster dataset. Instructions are at the same link. https://bitbucket.org/rsbiodiv/randomforestclassification
  23. You can 100% apply image classification to a pan-sharpened image. There is plenty of peer-reviewed literature supporting this, including my own research. Arhanghelul makes excellent points on OBIA and taking advantage of the textural and spectral information in the data, in addition to the variety of non-parametric classifiers that are capable of extracting meaningful information from fused datasets. The final point from pasfans01 settles it entirely; if the accuracy assessment validates the results, then the approach is sound.
  24. Cactuz, decision rules are arbitrary unless they are derived using a statistical function like recursive binary partitioning. There is no one-size-fits-all rule for specific cover types. The best-known programs for classification and regression trees (CART) for remote sensing purposes are See5 and Cubist by RuleQuest, and Idrisi has a built-in decision tree classifier dating back to the Andes edition. If you are using ENVI, you can apply some of the built-in non-parametric classifiers such as Support Vector Machine (SVM) or Neural Net; these are machine learning algorithms capable of handling both continuous and categorical data. If you have your mind set on decision trees, then you can explore some of the good free options. A developer on Google Code implemented C4.5 (the freely available predecessor of See5) in IDL. This allows you to create decision rulesets using the See5 approach, and the output is in the ENVI Decision Tree format; just choose ENVI > Classification > Decision Tree > "Execute Existing Decision Tree." I've used this program myself and the results are impressive, and the developer created a decent GUI if you aren't comfortable with the IDL command line. You can find the code here: https://code.google.com/p/c45idl/ However, my favorite free program is the R implementation of random forests, an extremely robust classifier (a Python equivalent is sketched below). It requires knowledge of R, plus dependent libraries. You can download the randomForests script here: https://bitbucket.org/rsbiodiv/randomforestclassification ...and there are installation instructions and tutorials here: https://bitbucket.org/rsbiodiv/toolsforr and http://www.whrc.org/education/indonesia/pdf/DecisionTrees_RandomForest_v2.pdf Good luck
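     For anyone who prefers Python over R, here is a scikit-learn sketch of the same random forest workflow (all arrays are fabricated placeholders; in practice the features come from your layer stack and the labels from your training samples):

     ```python
     # Random forest classification sketch with scikit-learn, mirroring
     # the R workflow above. X is pixels x bands from the layer stack;
     # y is the class label per training pixel. Placeholder data only.
     import numpy as np
     from sklearn.ensemble import RandomForestClassifier

     rng = np.random.default_rng(4)
     X_train = rng.uniform(0, 1, (500, 6))    # 500 training pixels, 6 bands
     y_train = rng.integers(0, 4, 500)        # 4 hypothetical classes

     clf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                  random_state=0)
     clf.fit(X_train, y_train)
     print(clf.oob_score_)                    # out-of-bag accuracy estimate

     image = rng.uniform(0, 1, (200, 200, 6))  # placeholder layer stack
     classified = clf.predict(image.reshape(-1, 6)).reshape(200, 200)
     ```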