Visualizing Pavement Distress: The Complete Story of Pavement Inspection

Pavement management incorporates data collected using various methods to gain a complete view of how the pavement performs through its life-cycle.  One of the most common practices in pavement inspection is imaging with high-resolution cameras mounted on vehicles outfitted with precision GPS and inertial navigation.  This imaging, combined with laser profiling, constitutes the typical pavement inspection setup used by many DOTs as well as local government agencies.


Pavement inspections tend to follow a process that in many cases is proprietary and “black box” in nature.  This makes it hard for the purchasing agency to see how its roads were inspected and how the resulting pavement condition scores were generated.  Our team of engineers and GIS professionals has worked hard to develop a process that removes the “black box” from pavement inspection and makes it simple to trace inspection results back to their originating distresses in the field.

First, our entire process is geospatial from the start.  Our van’s location is tracked in six dimensions (position and orientation) in real time, and this information is used to calculate the exact location of pavement cracks in the resulting images.  Next, the pavement images are geospatially referenced in 3D at 1mm-pixel resolution, making it easy to extract low-severity cracks in a true 3D environment.  This process then allows us to create GIS vectors (points, lines and polygons) of each distress for each pavement image and deliver them to our clients as part of the pavement inspection deliverables.
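A toy version of that pixel-to-world calculation, assuming a north-up image with a known upper-left origin in a projected coordinate system and a 1mm ground sample distance (the function and variable names are hypothetical, not part of our production pipeline):

```python
# Sketch: convert a crack pixel (row, col) in a georeferenced pavement image
# to world coordinates, assuming a north-up image, a known upper-left origin,
# and a 1 mm (0.001 m) ground sample distance. Names are illustrative only.

PIXEL_SIZE_M = 0.001  # 1 mm per pixel

def pixel_to_world(row, col, origin_easting, origin_northing):
    """Map an image pixel to projected (easting, northing) coordinates."""
    easting = origin_easting + col * PIXEL_SIZE_M
    northing = origin_northing - row * PIXEL_SIZE_M  # rows increase southward
    return easting, northing

# A crack pixel 500 rows down and 1200 columns across from the image origin:
e, n = pixel_to_world(500, 1200, 400000.0, 3300000.0)
```

The resulting coordinates are what would be written into the distress point, line, or polygon vertices delivered to the client.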


This is a crucial piece of the pavement inspection “story” because it shows the purchasing agency exactly what distresses were identified and measured when creating the pavement condition scores for a section of road.  Being able to see these distresses on a map helps complete the story by enabling a rigorous QA/QC process using simple GIS tools.

Each section of road can be colored by its condition score and range of values.  This tells one component of the story.  The underlying distress information tells the rest: how a section of road was scored and assigned its inspection score.  With this information at their fingertips, pavement inspection personnel have a GIS-centric, user-friendly tool for QA/QC’ing pavement inspection data efficiently.
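As a minimal illustration of coloring sections by score, the snippet below bins a 0-100 condition score into display classes; the break values and labels are illustrative assumptions, not an agency standard:

```python
# Sketch: bin a pavement condition score (0-100) into map display classes,
# mirroring the "color each section by condition score" idea. The break
# values and labels here are illustrative, not an agency standard.

def condition_class(pci):
    """Return a display class for a 0-100 pavement condition score."""
    if pci >= 85:
        return "Good (green)"
    elif pci >= 70:
        return "Satisfactory (yellow)"
    elif pci >= 55:
        return "Fair (orange)"
    else:
        return "Poor (red)"

sections = {"Main St 01": 92, "Main St 02": 61, "Oak Ave 01": 40}
classes = {name: condition_class(score) for name, score in sections.items()}
```

In a GIS, the same binning would simply drive the symbology of the section layer.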




Mobile LiDAR to Support Positive Train Control

This article was originally written in 2011, but is being re-posted based on recent events…

DTS/Earth Eye just completed a positive train control (PTC) project for a national rail company that was evaluating the differences between airborne LiDAR and mobile LiDAR to support the collection of PTC data.  They are currently collecting airborne data for approximately 15,000 linear miles of rail.  In certain areas, the airborne data does not provide enough fidelity to accurately map the rails or the asset infrastructure that supports railroad operations.

From Wikipedia – “The main concept in PTC (as defined for North American Class I freight railroads) is that the train receives information about its location and where it is allowed to safely travel, also known as movement authorities. Equipment on board the train then enforces this, preventing unsafe movement. PTC systems will work in either dark territory or signaled territory and often use GPS navigation to track train movements. The Federal Railroad Administration has listed among its goals, ‘To deploy the Nationwide Differential Global Positioning System (NDGPS) as a nationwide, uniform, and continuous positioning system, suitable for train control.’”

The project involved the collection of mobile LiDAR using the Riegl VMX-250 as well as forward-facing video to support PTC asset extraction.  The system was mounted on a Hi-Rail vehicle, and track access was coordinated through the railroad’s master scheduler.  Once we had access to the tracks, we had one shot to make sure the data was collected accurately and that we had complete coverage.  All data was processed on-site to verify coverage, and we had a preliminary solution by the end of the day that was checked against control to verify absolute accuracies.  We collected the 10-mile section of rail in about 2 hours, and that timing included a couple of track dismounts to let freight trains pass.


The following graphics illustrate the point cloud coverage colored by elevation (left) and Intensity (right).



Mapping the rails in 3D was accomplished by developing a software routine designed to track the top of the rail and minimize any “jumping” that can occur in the noise of the LiDAR data.  Basically, once the rail breakline is digitized, a linear smoothing algorithm fits it to the top of the rail.  The following graphic illustrates how this is accomplished – the white cross-hairs on the top of the rail correspond to the breakline location in 3D.
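A stripped-down version of that smoothing step, applying a centered moving average to the breakline's elevations (the window size is an illustrative choice, not the routine's actual parameter):

```python
# Sketch of the "linear smoothing" idea described above: a centered moving
# average applied to the elevations of a digitized rail breakline to suppress
# point-to-point jumping caused by LiDAR noise. Window size is illustrative.

def smooth_elevations(z_values, window=3):
    """Centered moving average; endpoints keep their original values."""
    half = window // 2
    smoothed = list(z_values)
    for i in range(half, len(z_values) - half):
        neighborhood = z_values[i - half : i + half + 1]
        smoothed[i] = sum(neighborhood) / len(neighborhood)
    return smoothed

# Noisy top-of-rail elevations (meters) along the breakline:
raw = [10.00, 10.02, 9.97, 10.03, 10.01]
smooth = smooth_elevations(raw)
```

The production routine additionally snaps the smoothed line back to the top-of-rail surface, which a plain moving average alone does not do.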


So, back to the discussion of airborne vs. mobile PTC data.  Here is a signal tower collected by airborne LiDAR.  The level of detail needed to map and code the asset feature is lacking, making it difficult to collect PTC information efficiently without supplemental information.


The next graphic shows the detail of the same asset feature from the mobile LiDAR data.  It is much easier to identify the asset feature and type from the point cloud.  In addition to placing locations for each asset feature, we also provided attribute information that was augmented by the Right-of-Way camera imagery.  By utilizing this data fusion technique, we can provide the rail company with an accurate and comprehensive PTC database.


This graphic shows how the assets are placed in 3D, preserving the geospatial nature of the data, which is helpful when determining the hierarchy of assets that share the same structure.


One last shot of a station with all of the furniture, structures, etc. that make it up. Pretty cool!


Roadway Characteristics Inventory for DOTs Using Mobile LiDAR Technology


DOTs across the country are mandated by the federal government to keep track of their roadway assets and to report against these assets to receive federal funding for their maintenance and repair. Many DOTs conduct Roadway Characteristics Inventories (RCI) on an annual basis to update and maintain their data relative to these assets. Traditionally, this has been completed using a boots-on-the-ground approach, which has been very effective at building these inventories. Many DOTs are experimenting with other technologies, namely mobile LiDAR, to conduct these inventories and to achieve many other benefits from the 3D data captured in the process.

The next graphic illustrates the typical technology solution utilized for these projects. It is composed of the Riegl VMX-450 LiDAR unit, coupled with high-definition Right-of-Way (ROW) imagery. This system can collect at rates up to 1.1 MHz (1,100,000 points/sec) at a precision of 5mm. It collects points in a circular (360-degree) pattern along the right-of-way from 2 scanner heads facing forward and to the rear of the vehicle in a crossing pattern. The laser captures 3D points at a spacing of roughly 0.3 foot at speeds up to 70mph. This scanner can be adjusted to scan at a rate that is applicable for the project specifications to limit the amount of data collected and to ensure that the resulting point cloud data is manageable.
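The relationship between vehicle speed, scanner line rate, and along-track point spacing can be sketched as follows; the 200 lines/sec line rate is an illustrative setting, not a quoted VMX-450 specification:

```python
# Rough sketch of how along-track point spacing falls out of vehicle speed
# and scanner line rate. The 200 lines/sec figure is an illustrative setting,
# not a quoted VMX-450 specification.

MPH_TO_MPS = 0.44704  # miles per hour to meters per second

def along_track_spacing_m(speed_mph, lines_per_sec):
    """Approximate distance traveled between successive scan lines."""
    return speed_mph * MPH_TO_MPS / lines_per_sec

spacing = along_track_spacing_m(70, 200)  # meters between scan lines
```

This is the arithmetic behind tuning the scan rate to the project specification: faster driving or a slower line rate stretches the spacing, so the two are balanced against the target density.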


Right-of-Way imagery is also co-collected along with this LiDAR point cloud data. These images are used to identify appropriate attribution for each feature type being extracted from the point cloud. In this example, the DOT has digitized Shoulder, Driveway Culvert Ends, and Drainage Features (Culverts, Ditches and Bottom of Swale). Additional Features such as Signs, Signals, Striping, and Markings will also be extracted and then reported to the Feds on an annual basis. The mobile LiDAR data provides a 3D surface from which to compile the data and then the ROW imagery can be used for contextual purposes to support attribution. This methodology provides an effective process that can be used to create 3D vector layers and accurate attribution used to build a robust Enterprise GIS.

Both the ROW imagery and the mobile LiDAR can be used to collect and extract the RCI data efficiently for the DOTs, providing the DOT with a robust data set that can be leveraged into the future. The ROW imagery is typically used to map features at a mapping-grade level, while the LiDAR can vary a bit in accuracy. Since the relative accuracy inherent in the LiDAR is very high, it is used to conduct dimensional measurements related to clearances, sign panel sizes, lane widths, and other measurements that require a higher precision.

The DOT utilizes the derivative products from this RCI exercise to report to the Feds in a way that is fairly basic, but effective for securing its level of funding. The data capture itself is very technical in nature and focuses on high precision and accuracy. The RCI data is then extracted from this source data, maintaining a level of precision dictated by the source. Finally, the DOT aggregates this precise data up to a higher level and reports the total number of signs or the lineal feet of guardrail. Even though the reporting is basic in nature, the underlying data retains its precision and accuracy and can be used for other purposes related to engineering design or asset management.
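That aggregation step, rolling precise per-feature records up into the basic totals that get reported, might look like this in miniature (the field names and values are illustrative):

```python
# Sketch of the aggregation step described above: precise per-feature RCI
# records rolled up into simple totals for federal reporting. Field names
# and values are illustrative only.

features = [
    {"type": "sign", "route": "SR-50"},
    {"type": "sign", "route": "SR-50"},
    {"type": "guardrail", "route": "SR-50", "length_ft": 312.4},
    {"type": "guardrail", "route": "SR-50", "length_ft": 87.1},
]

# Total number of signs and lineal feet of guardrail across the inventory:
sign_count = sum(1 for f in features if f["type"] == "sign")
guardrail_ft = sum(f["length_ft"] for f in features if f["type"] == "guardrail")
```

The reported numbers are coarse, but each record still carries its survey-grade geometry for engineering design or asset management use.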

In conclusion, mobile LiDAR and Right-of-Way imagery are a safe and accurate way to collect and report against RCI variables for DOTs. This methodology promotes a safe working environment for both the DOT worker and the traveling public. It is also a cost-effective way to collect large amounts of 3D point cloud data which can be utilized for other purposes within the same Agency.

Who is Checking Your LiDAR Data?

Throughout the years, I have seen many projects advertised, awarded, executed and then delivered to the client. The client receives the data, copies it locally and then final payment is made to the vendor and life goes on as usual. Then, someone actually checks the data and notices that there are many discrepancies associated with the scope of work and what was actually delivered. How does this happen and how can it be avoided?

Step 1 – Start with a Clear Scope of Work

The scope should define exactly what is going to be collected, how it will be collected and how it will be verified and checked after delivery. For example, a simple LiDAR scope must define the target point densities (LiDAR), hydro-flattening parameters, and accuracies (absolute and relative) for the project. The scope should also define how the client will be checking the data for final acceptance of the deliverables.

Step 2 – Process a Pilot Area

The pilot area should be representative of the overall project and should be processed and delivered as if it was its own project. This allows for the team to identify any processing issues or special techniques up-front so that the rest of the project can move forward in a linear fashion, thus limiting the re-visiting of the data to fix problems at a later date. Once the pilot area is delivered, it should be checked against the scope of work to ensure that all deliverables are being met in accordance with the client’s expectations.

Step 3 – Process the Entire Project

Final processing can occur once the pilot area is delivered and accepted. This is a critical-path item that consumes the bulk of the project’s budget. Many projects will either be successful or turn into a disaster during this phase. The risk is easily mitigated, though, as long as the first two steps of this process are in place and properly executed by the team. This phase relies heavily on communication between the vendor and the client; if those channels are in place, the project will most likely run smoothly since everyone is on the same page.

Step 4 – Data Validation and QA/QC

This is where the overall success of a project is either validated or issues are identified that must be resolved before final delivery is accepted. The processes for checking these data sets are specific to different types of deliverables – we will focus on some niche market deliverables and give examples of how to check their associated data elements.


First off – make sure you have some kind of software that can open this data. Seems simple, but many clients do not have the most rudimentary piece of the puzzle – LiDAR viewing software. There are many commercial-off-the-shelf (COTS) products that can be used and each one has its strengths and weaknesses. The goal is to be able to load the entire project in one place and then use the tools within the software to verify the deliverables. The most important items to check include:

· Average Point Density across the project

· Relative (flight line to flight line) accuracies – this should be roughly half of the stated RMSE for the project (e.g. ~5cm for a 9.25cm RMSEz spec or 7.5cm for a 15cm RMSEz spec).

· Absolute (overall project) accuracies against ground control. Ground control should be on a hard, un-obscured surface and is typically tested to a 95% absolute accuracy specification. A minimum of 20 points is required, since one outlier out of 20 still meets the 95% specification. Larger areas can require significantly more control.

· Data classifications (e.g. Ground, Vegetation, Overlap Points, Low Point/Noise, etc.) as per the project specifications (ASPRS or USGS publishes these specifications).

· Check terrain edits (look for berms that are removed, building points in ground, low point noise and other anomalous data in the wrong classes).

· Projection information in the LAS file header.

· Verify Intensity TIFFs as per user-specified requirements.

· If breaklines are required, check the following:

          o Water bodies meet minimum size criteria.

          o Interior points are classified to the water class.

          o Client-specified buffers exist around these features.

          o Single-line drains (streams) meet minimum length and width requirements and are buffered as per client specifications.

          o Double-line drains (rivers) are monotonic (perpendicular elevations to remove leaning) and are buffered as per client specifications.

          o For all breaklines – check that elevations are at or slightly below the terrain for a sampling of tiles (typically 10% of the project).

· Review the survey report

· Flightline trajectories with appropriate metadata, flight logs, and other raw data collection records (GPS, inertial, etc.).

· Metadata for all project deliverables (this can be automated with a metadata parser).
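Several of the accuracy checks in the list above can be scripted. A minimal sketch, assuming vertical errors measured as LiDAR elevation minus surveyed control at each checkpoint, computes RMSEz and the NSSDA 95% vertical accuracy (1.9600 * RMSEz for normally distributed error):

```python
# Sketch of the absolute-accuracy check from the list above: compute RMSEz
# from LiDAR-minus-control elevation differences at checkpoints, then the
# NSSDA 95% vertical accuracy (1.9600 * RMSEz, assuming normal errors).

import math

def rmse_z(errors):
    """Root mean square of vertical errors (LiDAR elevation minus control)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def accuracy_z_95(errors):
    """Vertical accuracy at the 95% confidence level (NSSDA convention)."""
    return 1.9600 * rmse_z(errors)

# Elevation differences (meters) at a handful of hard-surface checkpoints:
diffs = [0.03, -0.05, 0.02, -0.04, 0.06]
rmse = rmse_z(diffs)
acc95 = accuracy_z_95(diffs)
```

Comparing `acc95` against the project's stated vertical specification (with the full 20+ checkpoint set, not five) is the essence of the final acceptance test.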

In conclusion, it is important to check your data immediately upon receipt, so that all quality control and quality assurance activities can be performed and verified while the data is still relevant. Good luck!

Utilizing Mobile LiDAR to Support Pavement Resurfacing

Many Departments of Transportation are looking for ways to save money while increasing safety on the roads. To do this, they are seeking out innovative approaches that leverage new technology. Mobile LiDAR is being used to determine roadway geometry information for long stretches of roadways that are candidates for resurfacing. The typical DOT procurement process involves the selection of a resurfacing vendor through a competitive bid solicitation and then the selection of the most qualified and “cost-effective” bidder. As budgets have become leaner, the competition for these projects has increased and thus drives the innovation curve to find the most cost-effective solution for the DOT.


To achieve this goal, pavement vendors have sometimes turned to the use of LiDAR information to develop their bid packages for the DOT. Historically, vendors would use the as-built information that was available from the DOT, which might be inaccurate, old or obsolete. This obviously leads to issues with the information that the pavement vendor uses to develop their bid packages. They are most interested in determining the correct amount of cut/fill needed to resurface the road while using the least amount of new material. One of the most important pieces of this puzzle relates to the cross-slope of the road, which facilitates roadway drainage and ultimately makes a road safer for the traveling public.


Mobile LiDAR provides a high-precision digital terrain model of the roadway surface that can be used to generate very accurate cross-slope measurements at specific intervals. Because the modeled road surface is continuous for the entire length of the project, cross-slopes can be generated for each travel lane as well as for the shoulders. The extracted cross-slope is then compared to the design specification and colored based on whether it is in compliance or out of compliance.
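The cross-slope check described above reduces to simple arithmetic on the extracted terrain model. A minimal sketch, where the 2.0% design slope and 0.5% tolerance are illustrative values rather than a DOT standard:

```python
# Sketch of the cross-slope compliance check: derive percent cross-slope
# from two elevations across a lane, then flag it against a design target
# and tolerance. The 2.0% +/- 0.5% values are illustrative, not a standard.

def cross_slope_pct(z_high, z_low, lane_width):
    """Percent cross-slope across a lane of the given width (same units)."""
    return (z_high - z_low) / lane_width * 100.0

def in_compliance(slope_pct, design_pct=2.0, tol_pct=0.5):
    """True if the measured slope is within tolerance of the design slope."""
    return abs(slope_pct - design_pct) <= tol_pct

# Crown edge vs. lane edge elevations (meters) over a 3.6 m lane:
slope = cross_slope_pct(z_high=100.06, z_low=100.00, lane_width=3.6)
ok = in_compliance(slope)
```

Running this at fixed intervals along the alignment and coloring each interval by the boolean result yields the compliance map described above.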


Once the areas have been identified that are out of compliance, it is easy for the pavement vendor to target those for the re-design effort. Instead of applying an average value across the entire section of road, specific areas can be identified and re-designed so that the pavement vendor can save the DOT money on materials. The ultimate benefit for both the pavement vendor and the DOT lies in the fact that everyone benefits – Pavement vendors can design roads more accurately and limit their risk of material over-runs while the DOT can select the most cost-effective vendor and have more budget available to pave their ever-increasing network mileage of roads.


Since mobile LiDAR data is very cumbersome to manage (2GB/mile), it is important to deliver the data in a format that is usable by the client. Sometimes raw LAS files work, and sometimes the client can only deal with vector files that will be used in GIS, AutoCAD or MicroStation, to name a few. We have found that KMZ files are useful as a delivery mechanism because they can be easily loaded and viewed by the client in very short order. Any derivative of these delivery mechanisms will work – it just depends on the expertise of the client and their computing environment.


Future discussions will focus on the DOTs and their collection of mobile LiDAR data so that they can provide it to all of the pavement vendors and receive the most cost-effective bid packages. Although there is an up-front cost associated with the LiDAR collection, it is believed that the downstream cost savings for both the DOT and the pavement vendor will more than outweigh the up-front cost of collecting the mobile LiDAR data.

Sign Retroreflectivity Compliance and Asset Management

Over the past few years, there have been many projects designed to determine an agency’s sign retroreflectivity compliance across their road network. Each project has been unique in terms of how the agency collected the data and how they ultimately managed the data into the future. Recent MUTCD regulations require the development of an inventory management program that documents the installation, maintenance and construction characteristics of sign infrastructure. Many agencies are faced with the daunting task of funding a replacement program that will comply with these new regulations into the future. Ultimately, the replacement plan needs to address non-compliance issues that are identified during the inventory/inspection process.

Step 1 – Sign Inventory

The first step in the compliance process begins with an accurate inventory. Signs can be collected utilizing many different techniques and each technique can have its pluses and minuses. Field collection programs can involve inspectors walking the roads, mobile imaging vehicles taking pictures of the roads as well as other collection techniques designed to identify compliance issues along the road. No matter which solution is selected, it needs to satisfy the overall goals and objectives of the project while providing an accurate inventory of the agency’s sign infrastructure.


Next, an agency needs to be able to match its available funding to the technology solution that achieves its project goals and objectives. The agency also needs to understand the trade-offs that are the necessary evil in projects like this – available funding typically dictates the quality of the solution that can be provided by the service provider. Furthermore, the quality of the data collected and its usefulness can be impacted by the choice of the solution and available funding.

Remember that the ultimate goal of retroreflectivity compliance is centered on the replacement of signs once they fall below the minimum reflectivity standard as defined by FHWA. Many agencies would rather start replacing signs today instead of spending money to create their inventory and a management plan. This makes sense economically in the short-term, but can introduce problems from a long-term management perspective.

Step 2 – Estimating the Replacement Cost of the Sign Network

The next graphic illustrates the total replacement cost as calculated using the FHWA “Sign Retroreflectivity Guidebook” for an agency with a 4,383 centerline mile road network.


The cost to replace all signs for this agency approaches $17.5 million. Please note that this does not include the cost of the labor, equipment and other material costs incurred for the actual installation of these signs. The inventory of signs for this agency cost approximately $800k, or roughly 5% of the total replacement cost for these signs. Although significant, this investment is crucial to ensure the longevity of the Sign Management program designed to manage these assets throughout their life-cycle.

Step 3 – Choosing a FHWA-Approved Sign Management Methodology

The chart below illustrates the advantages and disadvantages related to a few of the FHWA-recommended methodologies. Most of these methods have been implemented in one way or another at various agencies across the Country.


The “Measured Retroreflectivity” method is popular at many DOTs and Toll Authorities. I believe this is the case because these agencies typically manage facilities that carry higher volumes of traffic that operate at higher speeds, thus increasing the risk and potential consequences of an accident. Many County and City agencies are utilizing the “Visual Nighttime Inspection, Expected Life, Control Sign, or Blanket Replacement” methods to manage their sign infrastructure. Each mentioned method is used for different reasons (financial vs. headcount) and has a lot to do with legacy management techniques (“We’ve always done it this way”).

There really isn’t a management method that can be considered “The Best” or “The Most Cost-Effective”. It is solely dependent upon an agency’s goals and objectives for the management of their sign infrastructure. I typically recommend conducting an inventory first and then implementing a management plan that uses the concepts of Condition, Risk, and Valuation to help prioritize which signs should be replaced along with the best timing for the replacement. This can prove very valuable since the highest risk signs can be replaced first and the least risky signs can be programmed for replacement as funding becomes available.
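A minimal sketch of that Condition/Risk/Valuation prioritization idea, with illustrative 0-10 scores and weights (not an FHWA-prescribed formula):

```python
# Sketch of the condition/risk/valuation prioritization described above:
# a weighted score per sign, with the highest score replaced first. The
# weights and 0-10 scales are illustrative assumptions, not a standard.

WEIGHTS = {"condition": 0.4, "risk": 0.4, "valuation": 0.2}

def replacement_priority(sign):
    """Weighted priority score; higher means replace sooner."""
    return sum(WEIGHTS[k] * sign[k] for k in WEIGHTS)

signs = [
    {"id": "S-101", "condition": 8, "risk": 9, "valuation": 4},  # failing stop sign
    {"id": "S-202", "condition": 3, "risk": 2, "valuation": 6},  # low-risk guide sign
]
ranked = sorted(signs, key=replacement_priority, reverse=True)
```

The top of the ranked list becomes the first tranche of replacements, with lower-scoring signs programmed as funding becomes available.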


Finally, I also recommend that agencies utilize asset management software to manage the work performed on their sign infrastructure so that all replacements can then be managed according to their useful life and actual condition rating. This information can then be used in concert with one another to help develop a capital improvement plan that details the planned fiscal expenditures for the next 10 years, which is the typical life-cycle of a sign.

Automated Pavement Distress Analysis – The Final Frontier?

 We have been working with some automated methods for quantifying crack measurements and have had some interesting results.  How great would it be to collect pavement images, batch them on a server and have it spit out accurate crack maps that you can overlay in a GIS?  The technology is here!  Or, is it?

Most pavement inspections involve intricate processes where pavement experts rate segments visually, either from field visits or by rating pavement images in the office.  This introduces a lot of subjectivity into the rating results and typically culminates in a spreadsheet showing pavement ratings by segment.  The data is then modeled using ASTM performance curves that have been built from industry-proven pavement experiments.

There is no doubt that these curves are tried and true representations of how pavement performs in varying physical and environmental conditions and each project should take these factors into consideration when developing the preservation plans for an agency.

We have been working to develop a rating workflow that focuses on a combination of automated and manual processes to bridge the current gap of Quantitative and Qualitative pavement inspections.  The way we are doing this is through the application of GIS to the automated rating process.  Here’s how it works…

First, we begin with a pavement image from our LRIS pavement imaging system.  Images are captured at a 1mm-pixel resolution and then analyzed through an automated image processing workflow.


The resulting image creates a “crack map” that identifies the type, severity and extent of the distresses on that section of pavement.  The process is fully automated and handled entirely by the computer.
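In highly simplified form, the automated crack-map idea exploits the fact that cracks appear as dark pixels in the 1mm-resolution intensity image. The sketch below thresholds a tiny grayscale image and reports the cracked fraction; real pipelines add filtering, morphology, and severity/type classification, and the threshold of 80 is an illustrative value:

```python
# Very simplified sketch of automated crack detection: threshold dark pixels
# in an 8-bit grayscale pavement image and report the cracked fraction.
# Real pipelines add filtering, morphology, and severity classification.

def crack_mask(image, threshold=80):
    """Binary mask (1 = crack) for a grayscale image given as nested lists."""
    return [[1 if px < threshold else 0 for px in row] for row in image]

def cracked_fraction(mask):
    """Fraction of pixels flagged as crack."""
    total = sum(len(row) for row in mask)
    return sum(sum(row) for row in mask) / total

img = [
    [200, 200, 60, 200],
    [200, 55, 200, 200],
    [70, 200, 200, 200],
]
mask = crack_mask(img)
frac = cracked_fraction(mask)
```

The binary mask is what gets vectorized into the GIS crack geometry during the later steps of the workflow.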


Once we have the crack maps in place, we then apply a manual editing process that is GIS-centric by nature and the resulting crack map is a more accurate representation of the real-world conditions.


Once the edited crack maps are compiled, the data is exported to a GIS where the extents are calculated geospatially and then integrated with a pavement management system.  This is where all of the Pavement Condition Indices (PCI) are calculated and applied to each agency’s specific pavement rating methodologies.  Since the process is geospatial in nature, it is easily imported to ANY pavement management software and gives our clients the flexibility to apply any rating methodology they desire.
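As a greatly simplified illustration of turning distress extents into a condition index (real PCI per ASTM D6433 uses deduct-value curves and corrected deducts; the deduct rates below are placeholders only):

```python
# Greatly simplified sketch of converting distress extents into a condition
# index: subtract a deduct per unit of distress from a perfect score of 100.
# Real PCI (ASTM D6433) uses deduct curves and corrections; these per-unit
# deduct rates are illustrative placeholders only.

DEDUCT_PER_UNIT = {"longitudinal_crack": 0.05, "alligator_crack": 0.20}

def simple_pci(distress_extents):
    """distress_extents: {distress_type: extent (assumed square feet)}."""
    deduct = sum(DEDUCT_PER_UNIT[d] * ext for d, ext in distress_extents.items())
    return max(0.0, 100.0 - deduct)

# Extents computed geospatially from the edited crack maps:
pci = simple_pci({"longitudinal_crack": 200, "alligator_crack": 150})
```

Because the inputs are just geospatially computed extents per distress type, the same records can feed any client-specific rating methodology, which is the flexibility described above.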


Of course, all agencies have a certain spending threshold and there are cases where automation is the only way to cost-effectively manage large volumes of data.  We recognize this fact and are working hard to bridge the gap of available funding and high quality data.