How much storage would offline maps take?

Answers (4)

Questions

  • Can I assume we are talking about the US google maps only? Yes
  • Can I assume we want to take into consideration compressions of the photos? Yes
  • Can I assume we are talking about the maps only, not live or satellite? Yes

Inputs

  • Total area of the US is about 3.5 M square miles
  • To cover 1 square mile we need around 500 photos, assuming each photo covers roughly a 240 ft × 240 ft patch (a square mile is about 28 M square feet, so 28 M / ~56,000 ≈ 500).
  • Each uncompressed photo needs on average 2 MB.
  • If we compress the photos, each one averages 0.5 MB.

Outcome: To cover all the US land we need 3.5 M (US land) × 500 (photos per square mile) × 0.5 MB (size of a compressed photo) = 875 M MB ≈ 900,000 GB. This covers the base photos. Now we also need to cover other information like

  • Places
  • Photos contributions
  • Reviews and ratings
  • Other information

Let us assume that each square mile across the US has on average 10 places. Each place will have on average 200 – 300 photos as contributions; let us take 300. Each square mile will then need 10 × 300 = 3,000 photos. Total photos for all the US = 3,000 × 3,500,000 = 10.5 B. Size needed = 10.5 B × 0.5 MB = 5.25 B MB = 5,250,000 GB. Total size = 900,000 + 5,250,000 = 6,150,000 GB (about 6 PB).
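As a sanity check, the whole estimate fits in a few lines of Python. Every input below is this answer's assumption (photo counts, place density, compressed photo size), not a real Google Maps figure:

```python
# Back-of-envelope check of the storage estimate above.
# All numbers are this answer's assumptions, not measured figures.
US_SQ_MILES = 3.5e6          # total US area, square miles
PHOTOS_PER_SQ_MILE = 500     # base-map photos per square mile
COMPRESSED_MB = 0.5          # average compressed photo size, MB

base_map_mb = US_SQ_MILES * PHOTOS_PER_SQ_MILE * COMPRESSED_MB

PLACES_PER_SQ_MILE = 10      # places per square mile
PHOTOS_PER_PLACE = 300       # contributed photos per place
contrib_mb = US_SQ_MILES * PLACES_PER_SQ_MILE * PHOTOS_PER_PLACE * COMPRESSED_MB

total_gb = (base_map_mb + contrib_mb) / 1000
print(f"base map:      {base_map_mb / 1000:,.0f} GB")   # 875,000 GB
print(f"contributions: {contrib_mb / 1000:,.0f} GB")    # 5,250,000 GB
print(f"total:         {total_gb:,.0f} GB")             # 6,125,000 GB (~6 PB)
```

The base-map term is what rounds up to the ~900,000 GB figure; the contributed photos dominate the total.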

Not an expert, taking a stab at it…

Scoping:

-For the entire world? Assume yes.

-Just include roads and locations, or do we need to include photos, map images, street-view, business hours, etc? Assume only roads and named businesses / addresses, whatever is required for basic navigation.

Map is a graph with nodes connected by edges (road segments). Road segments can have curvature. Twisty roads can be represented by multiple segments. We don’t need 100% accuracy.

We also need to connect the map in some way to Lat / Long for GPS.

Locations have:

  • Name
  • Street number
  • Road name
  • City
  • State
  • Country
  • Zip / Postcode
  • Lat / Long
  • ID
  • Road segment ID

Road Segments have:

  • Length
  • Direction
  • Road name
  • Curvature
  • Street number range
  • ID

Nodes have:

  • Lat / Long
  • Road Segment IDs
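As a minimal sketch, the data model above maps directly onto a few record types. The field names and types here are illustrative, not Google's actual schema:

```python
# Illustrative record types for the map data model sketched above.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    """A graph vertex: a point where road segments meet."""
    lat_long: Tuple[float, float]
    segment_ids: List[int] = field(default_factory=list)

@dataclass
class RoadSegment:
    """A graph edge: one stretch of road between two nodes."""
    id: int
    road_name: str
    length_m: float
    direction: str                      # e.g. "one-way" / "two-way"
    curvature: float                    # rough curvature parameter
    street_number_range: Tuple[int, int]

@dataclass
class Location:
    """A named address or business, attached to a road segment."""
    id: int
    name: str
    street_number: str
    road_name: str
    city: str
    state: str
    country: str
    postcode: str
    lat_long: Tuple[float, float]
    road_segment_id: int
```

The ~200 / 50 / 10 byte estimates below correspond roughly to the amount of text and numeric data each of these records carries.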

Estimate average data size for each (locations ~200 bytes, road segments ~50 bytes, nodes ~10 bytes)

Estimate sq miles of terrain in the world (~50M)

Estimate average locations, road segments, and nodes per square mile, based on a breakdown of the world into urban, suburban, rural, and uninhabited: (70, 40, 20)

Multiply and add it up… 8.5×10^11 bytes

~850 GB

Apply some compression ratio… this is mostly text, so we need lossless compression… estimate a 60% reduction.

~340 GB
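Run exactly as a multiply-and-add, the inputs above come out slightly under the rounded totals (the exact product is ~8.1×10^11 bytes). A minimal Python sketch, using only the guesses already stated:

```python
# Multiply-and-add over the estimated densities and record sizes.
# All inputs are rough guesses from the reasoning above.
SQ_MILES_WORLD = 50e6
BYTES_PER_ITEM = {"location": 200, "segment": 50, "node": 10}
ITEMS_PER_SQ_MILE = {"location": 70, "segment": 40, "node": 20}

raw_bytes = SQ_MILES_WORLD * sum(
    BYTES_PER_ITEM[k] * ITEMS_PER_SQ_MILE[k] for k in BYTES_PER_ITEM
)
raw_gb = raw_bytes / 1e9
compressed_gb = raw_gb * (1 - 0.60)   # assume lossless compression saves ~60%

print(f"raw: ~{raw_gb:.0f} GB, compressed: ~{compressed_gb:.0f} GB")  # ~810 GB, ~324 GB
```

Rounding those up gives the ~850 GB and ~340 GB figures quoted above.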

I think that number is on the high side. My estimates are on the high side across the board, and the compression may be quite a bit greater. It’s probably between 10GB and 100GB.

I would use similar assumptions below.

  1. Can we assume we’re talking about the text/metadata on maps and not the photos, business metadata, satellite imagery and streetview – yes
  2. I’m going to assume here you want us to calculate offline storage for entire world – yes
  3. Is it for a single user/instance of this – yes
  4. Also I’m going to ignore compression at the moment but can talk about it later if necessary – ok

So, a few top-line ways to calculate this. Maps is really broken up into a bunch of grids, with each grid being an image. And so the total calculation can be

a) space required per average grid × b) # of grids to cover the whole world.

I’m going to modify this to be a) space per grid × b) # of grids in a square mile × c) # of square miles in the world.

a) My street is about 50 feet wide. I think we can fit about 6 of these roads/streets each way, so I’m going to say 300 ft by 300 ft is a grid. This will be, say, between 0.5 MB and 1 MB. For simplicity we can say 1 MB for now.

b) Each mile is about 5,000 ft. So there are roughly 16×16 ≈ 250 of the 300 ft grids in a square mile.

c) Let’s calculate the area of the US. LA to Seattle is about 1,000 miles. The US is about 2.5× wider than it is tall (when I look at it visually), so I’m going to say 1,000 miles × 2,500 miles = 2.5 M sq miles for the area of the US.

US area vs. the rest of the world: we know the world is 70% water, so let’s ignore that. Of the remaining 30%, the US seems to be just below 10% of the land mass (think out loud of how you have Africa, South America, Asia, Australasia, Europe, Antarctica, etc.). It seems to be < 10% but maybe not below 5%, so let’s say 5%. So the land mass of the world is 20 × the land mass of the US.

So let’s work out how many grids cover the USA: 2.5 M sq miles × 250 grids per square mile (b) = 625 M grids to cover the US.

20 × 625 M grids = 12.5 B grids for the whole world.

So a) 1 MB × 12.5 B grids = 12.5 B MB = 12.5 M GB = 12.5 PB.

Final answer is about 12.5 PB. Gut check: that is far too much to store on a phone, which suggests that storing a full image per grid is the expensive part — real offline maps lean on vector data and compression to shrink this. Would love to get your thoughts.
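A few lines of Python keep the unit bookkeeping honest here (grids per square mile multiplies the square-mile count, it doesn’t divide it). All inputs are the rough guesses from above:

```python
# Grid-based storage estimate, with the unit arithmetic made explicit.
# Inputs are the rough guesses from the reasoning above.
GRIDS_PER_SQ_MILE = 250   # ~ (5,000 ft / 300 ft)**2, rounded
US_SQ_MILES = 2.5e6       # estimated US land area
WORLD_MULTIPLE = 20       # world land mass ~ 20x the US
MB_PER_GRID = 1.0         # image size per 300 ft x 300 ft grid

world_grids = US_SQ_MILES * GRIDS_PER_SQ_MILE * WORLD_MULTIPLE
total_gb = world_grids * MB_PER_GRID / 1000   # MB -> GB

print(f"{world_grids:.3g} grids ≈ {total_gb / 1e6:.1f} PB")  # 1.25e+10 grids ≈ 12.5 PB
```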

I have no idea if this would qualify, but I took a different, rather singular approach. Curious about, and grateful in advance for, any and all feedback. My main question is whether this response goes into enough depth with regard to all the content types that Google Maps stores, as I chose to focus uniquely on image storage (but I state that clearly before I begin):

Clarifying questions:

  • global map or US/sectional map? Assume global
  • storage on a single end-user device? Yes
  • consider compression in storage? Ignore compression [IMPORTANT ASSUMPTION here]
  • satellite, map, street views, or all 3? Assume map
  • storage of each map layer or a single layer? Assume most granular map layer
  • image storage format is standard JPEG? Sure

For simplicity sake, and because we can easily extrapolate by scaling up at a later point, we will target the most granular map layer on Google Maps. We will therefore ignore all metadata (business/POI/GPS/etc- related data) and focus on the imaging aspect of Google Maps.

A simple formula therefore surfaces with 2 main factors: (size per photo) × (number of photos)

Assuming no overlap in photos and that each individual photo is stored uncompressed, we can equate the most granular layer of Google Maps to a patchwork of individual high-res JPEGs which take up ~0.5 MB of storage on average.

Since we’ve already assumed factor 1 to be equal to 0.5 MB, we will focus on estimating the total number of individual photos required for the entire surface area of the globe at the most granular Google Maps layer.

We begin by leveraging our own knowledge of Google Maps. We can estimate that the most granular map layer can fit ~10 average sized US houses back-to-back in one single photo. Assuming the average length of a US house is 30 feet, we estimate the corresponding real-world length of a Google Map photo to be 300 feet at the most granular layer. Assuming square photos, we have 300 ft x300 ft.

Next, we want to divide the surface area of the Earth by this area to see how many individual photos are required.

Recalling that the area of a circle is πr^2 and the surface area of a sphere is simply this multiplied by a factor of 4, we have 4πr^2. I recall the diameter of the Earth being approximately 8k miles, so the radius is half that, or 4k miles. If you don’t know the radius or diameter of the Earth, you can also estimate it from the circumference (~25k miles): the formula for circumference is 2πr, so r = 25k/(2π) = 12.5k/π ≈ 4k.

Back to our equation, so plugging the value for the radius into the equation yields:

4πr^2 = 4 × 3.14 × (4,000 miles)^2 ≈ 12 × (16 M miles^2) = 192 M miles^2 ≈ 200 M miles^2 (rounding up for simplicity’s sake).
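The same calculation in Python, using the ~8,000-mile diameter assumed above:

```python
import math

# Earth's surface area from the estimated ~8,000-mile diameter.
radius_miles = 8000 / 2
area_sq_miles = 4 * math.pi * radius_miles ** 2
print(f"~{area_sq_miles / 1e6:.0f} M sq miles")  # ~201 M sq miles, rounded to 200 M
```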

Lastly, we must divide this value of Earth’s surface area by the Earth’s surface area present in one Google Maps photo. However, we calculated Earth’s surface area above in square miles, but we calculated the photo’s surface area in square feet. We can roughly convert the latter to square miles by squaring the fraction of a mile (5,280 ft) that 300 feet represents. In other words, 300/5,280 reduces to approximately 1/X. What is X?

If 300 × 10 = 3,000, 300 × 20 = 6,000 and 300 × 15 = 4,500, one more split should get us closest to our value. We can choose 17 or 18, but will choose 18: 300 × 18 = 5,400. Therefore, we see that 300 feet is approximately 1/18 of a mile. Squaring this (18 × 18) yields 324. We can round this down to 300 for simplicity’s sake.

Therefore, we estimate that one most-granular-level photo in Google Maps represents 1/300 of a square mile.

The final step is to divide the surface area of the Earth by the surface area of a single photo, or:

200 M miles^2 ÷ (1/300 miles^2 per photo) = 200 M × 300 photos = 60 billion photos.

Using our assumption that each photo takes on average 0.5 MB of storage space, this is 30 billion MB. Dividing by 1,000 for GB and by another 1,000 for the answer in TB, we arrive at approximately 30,000 TB, or 30 PB.

A gut check against this value reveals it to be significantly high, although we have chosen to ignore a critical component here: compression. Also, we are talking about storing high-quality photos of the entire surface of the Earth at the most granular level available in Google Maps, so this may not be all that unrealistic given that it’s uncompressed.

If we wanted to extrapolate outwards to include all layers (assume 20 layers total, from the farthest zoomed out to this one, the most granular), I would determine at which layer it becomes necessary to have multiple images. For example, at the most zoomed-out layer, you have a single image of the Earth seen from space. How many “zooms” does it take before multiple images start being required? I would assume at the third zoom, or layer 17 if we treat the most granular as layer 1 in the example above. Assuming each layer is an equal distance “zoomed out” from the next, you may be able to extrapolate with a simple formula from the storage size of a single photo to the storage size of all photos at the most granular layer, which we did above. I’m not 100% sure at this point, however, whether the growth would be linear.
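On that final uncertainty: if each zoom level splits every tile into 2×2 (the usual web-map tiling scheme), storage grows by 4× per level, so the total across all layers is a geometric series dominated by the deepest one — roughly 4/3 of it, not a linear multiple. A hypothetical sketch, reusing the 30 PB figure from above:

```python
# Tile-pyramid growth: assume each zoom level splits every tile into 2x2,
# so each level stores 4x the previous one. Summing from the deepest layer
# downward gives 1 + 1/4 + 1/16 + ... -> 4/3 of the deepest layer.
DEEPEST_LAYER_PB = 30   # most granular layer, from the estimate above
LEVELS = 20             # assumed number of zoom layers

total_pb = DEEPEST_LAYER_PB * sum((1 / 4) ** i for i in range(LEVELS))
print(f"all layers ≈ {total_pb:.1f} PB vs {DEEPEST_LAYER_PB} PB for the deepest alone")
```

So under this assumption, including every layer only adds about a third on top of the most granular one.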