OLD NEWS 2005 - 2015

Back to latest news

July 1, 2015 - Change in Forecast

Parg pushed a Vuze change late Monday, June 29, and it had an immediate effect. The lesson: if your application becomes popular, you really need to contribute more to the network than you take; we can't repeal the laws of supply and demand. Follow the Vuze discussion over on zzz.i2p.

Things took a dive again at midnight last night, and we've had several reports of issues probably related to the leap second. We're investigating, but you should check your router and your eepsite logs. If it looks like things are semi-dead, restart I2P.

Changing the forecast to clearing.


June 24, 2015 - Change in Forecast

Three weeks since the 0.9.20 release, and it clearly made build success worse, even though we made several changes in that release that were intended to add capacity to the network and make things better. Really don't know why. We gave up whatever improvement happened in 0.9.19, and then some. We're discussing possible causes and solutions on IRC, and I've also restarted the discussion with parg about Vuze over on zzz.i2p.

We're really starting to see a lot of instability, especially during peak times (weekends, and weekday evenings UTC). Changing forecast to stormy.

By the way, the jump in the leaseset chart in early June was partially due to the 0.9.20 release, but also partially due to me fixing the way the stat is calculated.


April 8, 2015 - 0.9.18 and 0.9.19, Change in Forecast

Haven't posted an update in a while. March and early April have seen a big drop in exploratory tunnel build success for several reasons: (More info on this zzz.i2p thread)

  1. Reduction in number of floodfills, due to tightening of criteria in 0.9.18
  2. Increased CPU load on floodfills, due to a change in 0.9.18 which increased the number of routers sending encrypted lookups
  3. A doubling of leasesets in the network and a large increase in the number of tunnels, due to the Vuze rollout of version 5.6.0 supporting I2P. This all started on March 11.
  4. Poor overload handling in the router Job Queue, which often resulted in a router stuck at 100% CPU after it got there. Fixed in 0.9.19.

Put all this together and we have build success that's at its lowest in several years. Fortunately, this doesn't appear to be causing widespread reliability issues, at least for Java I2P; i2pd had a lot of trouble (due to lack of profiling in tunnel peer selection algorithms), but I believe that its 0.9.0 release improves things. 0.9.19 should improve things significantly. However, the update will temporarily add several thousand more leasesets from i2psnark's update feature. This may make the update cycle extremely bumpy.

Changing the weather to partly cloudy for now, and severe thunderstorms next week.


November 19, 2014 - Build success improvement in 0.9.16

Exploratory tunnel build success jumped up several points in the last two weeks, coinciding exactly with the release of 0.9.16. Good news, although I don't have any theories why. To be researched.


October 23, 2014 - stats.i2p services fixed for ECDSA and EdDSA

I have fixed several bugs related to key certificates in the stats.i2p registration and subscription services. You may now register hosts with ECDSA or EdDSA signatures.

You may create eepsites with these signatures by selecting the appropriate option in i2ptunnel when you create a new server tunnel. ECDSA P-256 and P-384 eepsites will only be accessible by clients running I2P 0.9.12 or higher with Java ECDSA support. ECDSA P-521 eepsites will only be accessible by clients running I2P 0.9.15 or higher with Java ECDSA support. EdDSA eepsites will only be accessible by clients running I2P 0.9.15 or higher.

Thanks and welcome to our first registered ECDSA P-256 eepsite, the opentracker tracker.thebland.i2p.

We will be changing the defaults over the next year to use stronger signatures. A way to migrate existing eepsites to stronger signatures is in the planning stages.

Edit Oct. 24: More fixes done.

Edit Dec. 9: Jump server fixed, see this post for details.


July 1, 2014 - stats.i2p migrated to new host

I've moved stats.i2p to a new host that will hopefully be faster and more reliable. Some things may have broken in the move. If you find anything, please let me know. The graphs will have discontinuities due to the move.


April 27, 2014 - Most stats removed on dashboard

Published client tunnel build stats were removed in 0.9.12 and are no longer available. The total floodfills stat is no longer reliable on my router, due to the increase in floodfills and changes in the code that make it much less likely that any one router knows all the floodfills all the time. This directly affects the total routers and total leasesets graphs too. Check out bigbrother.i2p for a look at total network size based on cooperative sampling.

Sad to see it go, but it's inevitable given the growth of the network. I started stats.i2p almost nine years ago, just after I had started playing with I2P. Back then we had maybe a few hundred routers in the network on any given day, and 1500 unique routers seen per month. Every router essentially knew every other router. I kept separate stats and graphs for each router in the network. After a few years, as we became more confident in operating the network, we reduced the stats each router published, making a site like stats.i2p much less useful. But at that point, every router still knew every floodfill router, making lookups - and stats - easy.

The other big change in the last few years was the growth in floodfills, from a total of three hardcoded routers to a real, dynamic DHT with over a thousand. No longer does every router know every floodfill router. We have a "real" DHT with iterative lookup. Even more problematic for stats, the number of floodfills a router does know fluctuates dramatically, frustrating attempts to use a simple "fudge factor" to estimate network size. Do a web search for "DHT size estimation" for more background on the problem. While the floodfills still publish stats on how many routers and lease sets they know, taking an average and multiplying by the number of known floodfills and a fixed fudge factor has become far too unreliable to publish as graphs.
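
The "DHT size estimation" problem can be shown with a toy estimator (this is an illustration of the general technique, not the router's or stats.i2p's actual code): if floodfill keys were uniform in the keyspace, the normalized distance from your own key to the n-th nearest known floodfill tells you roughly how dense the network is.

```python
import random

KEYSPACE = 2 ** 256  # I2P routing keys are SHA-256 hashes

def estimate_dht_size(own_key, known_keys, n=20):
    """Estimate total peer count from the n nearest known keys.

    If keys are uniform in the keyspace, the n nearest neighbors of a
    random point span roughly n/total of it, so total ~= n / distance.
    """
    dists = sorted((own_key ^ k) / KEYSPACE for k in known_keys)
    return n / dists[n - 1]

random.seed(42)  # deterministic toy network of 2000 "floodfills"
keys = [random.getrandbits(256) for _ in range(2000)]
estimate = estimate_dht_size(random.getrandbits(256), keys)
# The estimate lands in the right ballpark but swings by tens of
# percent from sample to sample -- exactly the fluctuation that makes
# a single snapshot too unreliable to graph.
```

Averaging many such samples tightens the estimate, which is why a one-router snapshot with a fixed fudge factor stopped being good enough.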

We now cling to exploratory build success as our remaining network health indicator.


March 31, 2014 - More stats to go away

Removing published client tunnel build stats in 0.9.12, so these will not be available in the netdb anymore. Also, as you can see on the dashboard, the total floodfills stat is quite erratic, due to the increase in floodfills, and changes in the code that make it much less likely that any one router knows all the floodfills all the time. This directly affects the total routers and total leasesets graphs too. So several of the dashboard charts will be removed soon.


February 20, 2014 - stats.i2p now subscribed to no.i2p

The stats.i2p hosts list is now subscribed to the no.i2p hosts list. Hosts registered on no.i2p that are approved by that site and listed in its alive-hosts.txt subscription will be auto-added to the stats.i2p feed after several days. Hosts registered on no.i2p no longer must be re-registered on stats.i2p. Note that the stats.i2p and no.i2p terms of service are not identical; hosts may be rejected by one and approved by the other.


February 7, 2014 - Big reduction in tunnel rejects

Wow. 0.9.10 cut client tunnel build rejections by about 40% so far (5% of build requests), with about 70% of the net updated. I assume this is due to the lower threshold for connection idle timeout, which resulted in fewer connections, and fewer rejections. It looks like client tunnel build success went up by about 5% so far, so it all got converted to success. Another clue that connection limits are still a big issue. But I need to go back and study it, to see if my theory is correct. I did not anticipate this. Great news.


January 10, 2014 - Happy New Year

Finished 2013 serving 144,482 jump requests, up 54% from 2012. I got requests for 4017 unique host names, with the #1 site getting 11,066 requests. As usual, the top 20 requested sites were dominated by several chans and other sites I've chosen not to publish in my hosts.txt, and several Russian sites.


December 12, 2013 - Router count adjustments

0.9.9 was released a few days ago. It reduces the flood redundancy from 5 to 4 (thus reducing apparent router and lease counts) and lowers the eligibility for floodfills (number of tunnels), which will increase the number of floodfills a little (thus increasing the apparent router and lease counts). Both of these changes are, of course, goodness that makes Sybil have to work harder. I think the first of these two adjustments will predominate, but we'll see. I will be eyeballing and tweaking the stats over the coming days.


November 22, 2013 - Marketplace review completed

Today I went through all the pending registrations for marketplaces, with help from a couple of volunteers. I accepted one, and rejected seven either for being unreachable or not having an acceptable TOS. The rejected ones were removed from the pending queue. As always, you may email me to request restoration by including a link to your new or revised TOS.


November 16, 2013 - Reminder - TOS for "markets"

I'm sitting on several "marketplace" hostname registrations waiting for a TOS to be posted for each of them. On or about November 22, I'm going to look for and review them for compatibility with the updated hostname registration TOS. If a TOS is not found or is not suitable, the pending registration will be removed at that time.


November 2, 2013 - Change in forecast

Let the sun shine!


October 22, 2013 - Registration TOS Updated

I have updated the hostname registration TOS to require that "marketplace" eepsites must have a posted TOS for review. If you have registered a market eepsite and it is not yet visible on the new hosts page, I am holding it and awaiting your TOS to be posted. If you do not post a TOS by November 22, the pending registration will be removed.


October 11, 2013 - Last-up dates back on jump service

Thank you very much to "slow" and his inr.i2p services for answering my call for help and providing a feed of last-up times with data going back to September 2011. This restores a service that I used to provide by scraping tino's inproxy. If I find the time I may merge his data with my old tino data (covering October 2005 - September 2011) to create a consolidated database covering 8 years. If somebody else wants to volunteer I will provide the old and new data sets.


October 11, 2013 - Change in forecast

The churn due to rekeying was not as bad as feared. Things were actually pretty good this week. I changed the forecast to "clearing" a few days ago, on October 7.


October 11, 2013 - Don't panic about 0.9.8 routers

The dashboard shows about 15% of the network is still on 0.9.8, 10 days after we released. A lot of that is our PPA users. Since the 0.9.8 problem affected Windows users only, it was not necessary for kytv to build a PPA package.

I saw some other discussion about network size stats going down. Some of that is certainly due to Windows users stuck on 0.9.8 and needing to manually fix it. That's bad but can't be helped. But it could also be just a decline from the boom of the past few weeks.


October 2, 2013 - Stop the madness

I got somebody doing a HEAD on stats.i2p every 5 minutes, and somebody else fetching newhosts.xml every 10 minutes, and somebody else doing a garbage GET every hour. Please fix your sh*t, and don't poll more than once per hour. Kthx.
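
For anyone scripting against this site, a conditional fetch is the polite way to poll (a generic HTTP sketch; nothing in it is stats.i2p-specific, and the URL is whatever you are fetching):

```python
import urllib.error
import urllib.request

def polite_fetch(url, last_modified=None):
    """Fetch url only if it changed since the previous call.

    Returns (body, last_modified); body is None on 304 Not Modified.
    Feed the returned last_modified back in on the next poll so the
    server can skip resending an unchanged file.
    """
    req = urllib.request.Request(url)
    if last_modified:
        req.add_header("If-Modified-Since", last_modified)
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read(), resp.headers.get("Last-Modified")
    except urllib.error.HTTPError as err:
        if err.code == 304:  # unchanged -- nothing to download
            return None, last_modified
        raise
```

Combine that with a once-per-hour timer and you're well inside the limits above.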


September 21, 2013 - Change in forecast

Big jump in new users is causing network congestion, especially on weekends. Echelon reports about a 25% jump in routers over the last week or two. Probably due to the new Russian laws. Some of this may be Android too. Changing forecast to "weekend thunderstorms". Long range forecast: Severe storms week of Sept. 30 when the network rekeys after the 0.9.8 release.


August 11, 2013 - Moar Floodfills

As you've probably seen, we made class N routers automatically floodfill in 0.9.7. That only increased the floodfill numbers to about 525. Hottuna and I talked at Defcon about making class M floodfill also, but we agreed we should get some data on how the class N ffs are doing first. We may still want to do this for 0.9.8, because the update-via-torrent hordes are going to temporarily double the LeaseSets. But even so, the M ffs would come along too late to help...


May 31, 2013 - Tunnel Stats Glitch (again!) After 0.9.6 Release

An inadvertent change in the way tunnel stats were published in 0.9.5 caused the tunnel stats to look bad after that release. Took me a little while to panic and then find the problem (see March 13 entry below). Well, the way I fixed it in the stats.i2p processing was to look for router version 0.9.5. So, sure enough, 0.9.6 comes around and I forgot all about that, and 0.9.6 != 0.9.5 so it all happens again (including the panic). Fixed. At least until next time. Sometimes the screwups are just embarrassing.


May 15, 2013 - Unlimited Floodfills

As you may know, we've been gradually increasing the number of floodfills over the years, ever since that scary time on March 13, 2008, when we only had three hard-coded floodfills, and two of them went down. However, as long as we had a lower "minimum" than the number of routers that "wanted" to become floodfill, a determined attacker could overwhelm a floodfill, which would cause that router to turn off floodfill and not come back. As the UCSB researchers pointed out, this isn't good.

The minimum was changed to 300 in 0.9.2 and 360 in 0.9.3. After discussion, and as a result of the UCSB research, we made a large jump to 500 in 0.9.5. As you can see on the dashboard, this did not increase the floodfill count much; it was about 400 both before and after the 0.9.5 release. This means we have accomplished our goal: all eligible routers will become floodfill. Great news!


March 13, 2013 - Stats fixes after 0.9.5 release

Router version graph was broken for a couple days after the release, a delayed consequence of the rrdtool update.

Tunnel build stats were broken from the release until today, a consequence of an unintentional change to the stat format (some harmless cleanup wasn't so harmless...). It looked like tunnel build success rates were crashing but they weren't really. Thanks to dg and Meeh for helping to diagnose.


February 11, 2013 - Rrdtool update

Updated to a more recent version of rrdtool to fix the graphs with no scale on the Y axis. Fonts look a little different too.


February 9, 2013 - Viewmtn down, possibly for good

My mtn repo is broken and it probably won't be fixed for quite a while. Please use alternates kindly set up by meeh and kytv.


January 4, 2013 - Jump Stats for 2012

stats.i2p served 93,594 jump requests in 2012, for 3141 unique host names. The most popular request, by far, was for rusleaks.i2p (now down), with 11,361 requests. Happy new year!


December 24, 2012 - Change in forecast

75% of net updated to 0.9.4 and build success rates have greatly improved. Changing forecast to sunny. Merry Christmas.


December 19, 2012 - Change in forecast

One third of net updated to 0.9.4 and looks like build success rates are improving. Too early to declare victory, but changing forecast to partly cloudy, clearing by Christmas.


December 11, 2012 - Change in forecast

Network continues to deteriorate. Not just connection breakage, but trouble getting leasesets, and trouble with initial connections. Changing forecast to rain with a chance of storms, but should clear by Christmas as we get 0.9.4 released.


November 20, 2012 - Hang in there

Just found and fixed the stupid bug that's dropped the crucial tunnel build success stats by 10-15% since the 0.9.2 release. Sure, the big influx of Russian users is a part of it too. But happy to have found the problem and looking forward to getting the 0.9.4 release out in a few weeks to get the graph pointing back up again. We work so hard to make things better so when we take a step backwards it stinks. Oh well. Onward and upward.


October 10, 2012 - Graph glitches

As you can see, the 3 graphs on the top of the dashboard are getting glitchier. The glitches in question are on all 3 graphs at once, and caused by the floodfill count, which is used to calculate the estimated router and leaseset graphs. Two theories:
  1. It's the French researchers stopping and restarting their PlanetLab-based floodfills
  2. We're finally getting enough floodfills that any one router (or, at least, the one in my closet) doesn't know all of the floodfills at once. As normal RouterInfo expiration happens, the total floodfill count ebbs and flows.
A few weeks ago I assumed it was 1) but now I'm starting to think it's 2). Will keep researching it.


October 6, 2012 - Adjusting the graphs

Clearly after the 0.9.2 release, the routers chart was too low. Adjusting it upwards by about 20%. Echelon has confirming data on news.xml fetches. The leaseset count looks way too high, but that could be real. Here are histogram charts of known routers and known leasesets from a snapshot at about 9 PM UTC today. While the data looks really skewed at the top end, the median is only about 5% less than the average for leasesets and 10% less for routers. See the two older entries below for more info.


September 30, 2012 - Change in forecast

Tunnel build success continues to go down, 9 days after the release. Maybe it's only due to the influx in new users, maybe 0.9.2 made things worse, maybe there are possible troublemakers out there. But changing the weather to partly cloudy for now. Causes to be analyzed, and thunderstorms may be coming.


September 22, 2012 - Bumpy weekend

Big influx of Russian users in the last couple days. Combine that with the 0.9.2 release and things may be a little choppy.

0.9.2 again increases the number of floodfills and reduces the flood redundancy. This will again mess with the total routers estimate, and I'll need to tweak the calculation as 0.9.2 kicks in.


August 25, 2012 - Total routers/leasesets adjustment; leaseset experiment / attack underway?

As of 0.9.1, the flood redundancy was lowered from 8 to 6, but I forgot to adjust the formulas here to calculate estimated router and leaseset totals. I just did it, so the graphs will show a big jump in leasesets and a smaller one for routers today.
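
The adjustment is simple arithmetic (a sketch of the idea; the actual stats.i2p formula and fudge factor aren't published here, and the counts below are made up): each entry is flooded to `redundancy` floodfills, so summing the per-floodfill counts overcounts by that factor.

```python
def estimate_network_total(published_counts, redundancy, fudge=1.0):
    """Estimate total leasesets (or routers) from per-floodfill counts.

    Each entry is stored on `redundancy` floodfills, so the raw sum
    overcounts by that factor; `fudge` absorbs floodfills we never saw.
    """
    return sum(published_counts) * fudge / redundancy

# Hypothetical per-floodfill leaseset counts:
counts = [120, 130, 110, 140]
before = estimate_network_total(counts, redundancy=8)  # stale formula
after = estimate_network_total(counts, redundancy=6)   # 0.9.1 value
# after / before == 8/6: correcting the stale divisor makes the
# graph jump by a third, with no change in the underlying network.
```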

In addition, somebody is doing some sort of experiment or attack, and there's about double the normal number of leasesets right now. There are three cycles - one starting about the 5th of August, one starting around the 15th, and the largest one starting a couple days ago.

You can see all these issues - the gradual slump in leasesets in August, with the abnormal increase superimposed, and the additional jump today caused by my adjustment - on the one month of floodfill graph and the six months of leasesets graph.


August 8, 2012 - No more totals

As of 0.9.1, routers only publish stats with their netdb entry one out of every 4 times (at random). This improves anonymity while still giving us a way to sample the network health, especially tunnel build stats. But this means I can't get a good number for network totals. So say goodbye to the total tunnels chart. Not that network totals meant anything anyway... it's been years since a single router knew all of the network.


June 13, 2012 - Now with ports!

As of 0.9-14 (required on both sides of the connection), HTTP ports are now sent end-to-end, and the server converts the port to a vhost, allowing multiple sites on a single destination. Here is my test site. Details on this zzz.i2p thread. Search engine crawlers, hope you have good canonicalization.


May 24, 2012 - Welcome TPB

After confirming that the registration was legitimate, I gave them the previously reserved tpb.i2p and the previously used thepiratebay.i2p, with the same key (b32 here). The latter hostname was registered here years ago, briefly used, and then quickly banned from this service for offensive material. That whole sad event led to stricter registration rules here, and the reservation of certain host names.

If you have the old thepiratebay.i2p in your address book (key starting with 95ng...), you may wish to delete it or change it manually, as subscriptions will not change an existing entry. Or, leave it as-is and just use tpb.i2p. Or make your own alias. Remember, all naming is local.

Content seems to be the thing that brings people to I2P. If you would like to lobby some big web site to move to I2P, let me know and I'll reserve a host name for them.


May 6, 2012 - More Floodfills

By the way, 0.9 increases the min floodfill count to 250 (and it was increased to 200 in 0.8.13). Each time this happens the estimates on total routers need to be tweaked. At some point, maybe we are there yet and maybe not, a typical router does not know all the floodfills. At some other point, even a typical floodfill router does not know all the floodfills. That's why there is a debug page in the console that attempts to estimate the actual number of floodfills (and leasesets) by computing the size of the local "slice" of the netdb DHT. Look at the bottom of that page. This only works if your router is floodfill.
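
The slice idea can be sketched like this (a toy model with contrived numbers; the console's debug page is more careful than this): your "slice" is roughly the fraction of the keyspace closer to your key than to any other floodfill you know, and scaling up your local counts by that fraction gives a network total.

```python
def slice_fraction(own_key, floodfill_keys, keyspace=2 ** 256):
    """Rough size of our DHT slice: the normalized XOR distance to the
    nearest other known floodfill.  With uniform keys this averages
    about 1/N of the keyspace for N floodfills."""
    nearest = min(own_key ^ k for k in floodfill_keys if k != own_key)
    return nearest / keyspace

def estimate_total(local_count, fraction):
    # If our slice covers `fraction` of the keyspace and holds
    # `local_count` entries, the whole netdb holds roughly this many.
    return local_count / fraction

# Contrived example: nearest floodfill at XOR distance 2**254,
# so our slice is a quarter of the keyspace:
frac = slice_fraction(0, [2 ** 255, 2 ** 254])   # 0.25
total = estimate_total(10, frac)                 # 40.0
```

This is also why the page only works on a floodfill router: a non-floodfill doesn't sit at a slice boundary and doesn't store a representative chunk of the netdb.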

So at some point the min number of floodfills is so high that essentially all eligible routers are floodfill. Not sure what that number is.


May 5, 2012 - 7-Year Anniversary of stats.i2p

Yes, more or less, 7 years of squiggly lines. We had about 1500 unique routers per month back then, now about 22,000. Go back in time on the old news page. More and more nostalgia is on zzz.i2p.


May 4, 2012 - Bandwidth removed from published netDB

In release 0.9, the bandwidth stat is removed from routers' published netdb. This is part of the years-long plan to remove stats. Any published stats, of course, are some risk to anonymity, but we have to balance that with the need to diagnose a network that is rapidly growing and changing. We didn't really need the bandwidth number any more (and the published tx and rx bandwidths had been set to the average of the actual tx and rx years ago). So say goodbye to the average and total stats. These weren't on the dashboard but they were on the summary page.


February 5, 2012 - Sunny

One month after 0.8.12 released and over 90% of net using it. Only modest gains to build success but the congestion control (WRED) changes really seem to help page load times. Record numbers of routers and lease sets the last two weekends, yet client tunnel build success remaining above 40% worst-case, and almost always above 50%. Build client expire dropped significantly - and immediately - after the release, probably due to the SSU transport fixes. Upgrading the network weather forecast to sunny.


January 3, 2012 - Top 25 Jump requests of 2011

You know, I was going to post these, but they're almost all for sites that are not in my database any more, for various reasons. So never mind. If I'm not going to promote them, I'm not going to list them either. The top was 19,000 jumps for rusleaks.i2p. The next highest was less than 4000.


November 10, 2011 - Clearing

Build success took a huge jump today, 48 hours after the 0.8.11 release. Things were getting better - slowly - every day before that, which was some combination of continued 0.8.10 updates and less influx of new users. Of course now the discussion begins on which of the six changes had the most effect, were they good ideas or not, should they be adjusted, and so on. Some things may become clearer in a week or two after the vast majority of the network has upgraded. Maybe.

As I've said many times, it's very difficult to determine where the congestion-collapse line is, or how far you are away from it... no matter which side of the line you are on. But I'm relieved that things are improving rapidly. Three releases in the last month was a lot of work, and almost every other I2P development was put aside. So hopefully, for now, I can try to find my notes on what I was working on in September.


October 26, 2011 - Another dashboard rework

Got rid of the peer capacity graphs and implemented tracking of the tunnel build success by router class.


October 19, 2011 - The Rusleaks Effect

Total jump requests through stats.i2p for all hosts, year to date: 54,700

Total jump requests through stats.i2p for rusleaks, since Oct. 4: 11,600

Rusleaks jumps per day:

2011_10_03      0
2011_10_04    332
2011_10_05   2712
2011_10_06   1525
2011_10_07   1148
2011_10_08    568
2011_10_09    556
2011_10_10    982
2011_10_11    879
2011_10_12    706
2011_10_13    530
2011_10_14    453
2011_10_15    345
2011_10_16    330
2011_10_17    361
2011_10_18    510
2011_10_19    365  (as of 1600 UTC)

Routers seen by stats.i2p in 30 days before Oct. 4: 17,000

Routers seen by stats.i2p in 30 days before Oct. 19: 21,500


October 19, 2011 - Worsening Weather

Clearly, 0.8.9 provided very little benefit and that was quickly overtaken by continued network growth. We're now at the worst build success since summer '09. Changing weather to 'severe thunderstorms'. Working on some new ideas for 0.8.10, coming soon.


October 14, 2011 - Congestion

OK the 36 hour update frenzy is over, so I re-enabled the dashboard. ** update - disabled again ** ** update - enabled again ** Yesterday (the 13th) was oh-ever-so-slightly better build success rate than the day before, which isn't much, but after 5 days of getting worse every day, I'll take it.

Congestion is a funny thing, you do 10 different things to eliminate it and nothing works and then one day *poof* it all goes away. We'll see in the coming days whether 0.8.9 really helps or whether we have a lot more work to do.


October 12, 2011 - 0.8.9

0.8.9 is out and we have hopes that it reduces congestion and improves tunnel build success. I expect it will help somewhat but there is no magic bullet in there. We will see in a few days. As with the last release, I've temporarily disabled the dashboard so the bandwidth can be used for delivering the update files.


October 5, 2011 - Big Day

Well, this was a big day, thanks to the new rusleaks.i2p. Over 10K routers for the first time, at least 1000 new routers today (the real number perhaps 2000 or more). Tunnel build success continues to decline. The 0.8.9 release next week may help build success a little (due to UDP improvements), but I don't think it will be significant. More work to do on connection limits. I'm not out of ideas yet, that's the good news....


September 24, 2011 - Totals

I saw a comment on IRC about the various "totals" graphs, about how they aren't growing even though the net is growing. That's right. Most of these can only add up the peers that stats.i2p's router knows about at any one time. At one point long ago, every router knew about every other router, all the time. Since router infos are kept in memory, that had to change as the net grew bigger. These days, a typical non-floodfill router knows about 1200 routers out of about 6000 that are probably active at any one time.

Not only that, the peers that a router does know about are not a representative cross-section. A router knows about almost all the floodfills, and most of the fast peers out there. But a much smaller portion of the slower peers. So several of the stats based on a total or average of all known peers (average bandwidth, total hops, etc.) are suspect. Not only are they quite misleading, they are getting even worse as the net grows further.

So I'll be removing some graphs from the dashboard and adding this note to the stats index page.

One other thing: the tunnel build success rate has been trending down for six months and is now at the point where we're having problems. Exploratory tunnel build success bottoms out at 20-25% every day, which is pretty bad. And due to the bad cross-section issues described above, the true average is probably much worse. I think it's mostly due to connection limits again. I'm hopeful the UDP changes coming in 0.8.9 will make UDP connections work better and improve things. But that's only a hope. I'm changing the dashboard weather from "sunny" to "evening and weekend storms".


July 22, 2011 - Higher Level Domains

Added some notes and warnings about registering 3LDs / 4LDs if you don't control the 2LD, on the add key form. Seeing a lot of these lately.


July 19, 2011 - Better

OK I think the total routers graphs are now pretty accurate. Before 0.8.7 they were a little high. Also, the intra-day variation is now higher (1200-1500) than before, that looks right too. The rapid growth that started 3 months ago continues. Now over 12,000 uniques per month.


June 30, 2011 - Adjustments

I adjusted the minimum floodfills from 75 to 110 in release 0.8.7. This will make the apparent number of routers in the network go up. But I also changed the threshold for re-fetching RouterInfos, which should reduce the number of known peers for most routers. This will make the apparent number of routers in the network go down. I don't know which will "win", so I don't know which way to adjust the formulas for the graphs yet. Expect some strangeness in the graphs the next few days as the network upgrades and I try to guess a new fudge factor for the formulas.

Update July 1: I just looked at some netdb entries, and the 0.8.6 floodfills are averaging 2600 peers while the 0.8.7 floodfills are averaging 1700 peers. So a big success in reducing the memory usage of the floodfills. Meanwhile, the floodfill count is up from about 90 to about 115, so that's working too.


May 18, 2011 - Adjustments

I adjusted the minimum floodfills from 60 to 75 in release 0.8.6. At some point I was thinking the minimum number didn't matter any more, that the total would grow to find the natural limit, but that wasn't true. I forgot about periodically adjusting the number, and that may have let the floodfills start to become more loaded. So I'll need to remember to keep adjusting the number up each release. Although that will hasten the need to implement iterative lookup.

Anyway, one result is that the network totals are a little off as I adjust my formulas. I don't think we really had 4400 routers today. I'll need to cross-check with echelon to get the best estimate.


March 9, 2011 - Spread

An important network characteristic that must be reviewed periodically is what I call the "spread", or how the tunnels are distributed throughout the network. Are most of the tunnels going through the few fast routers? Are we effectively using the available bandwidth of the network? Are routers finding and using fast peers, and discovering new ones? Too little spread and traffic is too concentrated. Too much spread and lots of traffic goes through slow peers. The general process is called peer selection and profiling. The best place to see spread is on the tunnel percent by class graph.

The algorithms must be changed carefully, as a mistake or overcorrection can cripple the network. The last major change was in late 2008. In summer 2009, we had a major congestion collapse that was caused by new connection limits, and it was ultimately resolved without modifying the core algorithms.

Since 2008, the tunnel build success rates have improved by a factor of 5 or so. We've gone from 15% success to 75%. This major accomplishment is the result of dozens of tweaks over the years. But one side-effect is that "spread" has reduced dramatically. Since the fastest routers are all accepting most build requests, peers keep using them, and a lot of slightly-slower routers don't get used effectively.

So 0.8.4 contains a slight tweak, meant to slightly increase the spread. It's too early to say for sure, but it looks like it's having an effect. It appears that Class O tunnel percentage is going down in the last few days. Check it out.


March 6, 2011 - Stat Bug

Due to a bug from years ago, but not visible until full stats were turned off by default in release 0.8.3, the client tunnel build expire stat was broken in 0.8.3. I fixed it in 0.8.4, and the network average displayed here will be mostly correct once most of the routers out there upgrade to 0.8.4. Of course, since the success and reject stats are percentages, they have also been wrong (a little high) for the last 6 weeks, but it isn't really apparent by looking at the charts.


March 6, 2011 - Botnet

As some of you may have noticed, there is a botnet out there that is building lots and lots of tunnels (see Feb. 14, Feb. 22-24, and Feb. 27-28). The botnet is run by a student at TU Munich and we are in contact with the student's faculty advisor at TUM. At this time, we do not think that the experiments are causing any damage to the network as a whole, and we encourage security research on our network, as long as it is done responsibly. As our network is still relatively small, it could of course be severely disrupted by a large experiment gone awry. If anybody sees any damage from this research activity, please let us know on IRC #i2p-dev and we will pass our concerns along to the advisor.

Of course, this research may result in the publication of vulnerabilities in I2P, possibly including correlation of eepsites with router IPs. In fact, I expect that. Our network is still small and has not seen much academic research, so don't rely on it for guaranteed anonymity, as it says on our home page. Also, we vastly improved our website documentation last year; that helps everybody, including security researchers, understand I2P, and it describes lots of areas for improvement. I2P may or may not be many things, but it is definitely a work in progress.


January 1, 2011 - Happy New Year!

Jump stats for 2010: 45,600 jump requests. (29,100 for 2009, 10,400 for 2008)


November 16, 2010

- A new little web lookup and jump form. I might expand the lookup to nicely format the output, and generate b32 and addresshelper links, instead of presenting it in hosts.txt format. I also spiffed up the add key form a little.

I also want to recognize the great work going on at py-i2phosts.i2p and inr.i2p. Finally, a serious alternative.


July 10, 2010

- Thanks to idiots, embargo time is now 72 hours.


June 20, 2010

- Due to intensified host name spamming, TOS violations, and other foolishness, I've doubled the embargo time on the stats.i2p add key form from 24 to 48 hours. This means any new key won't be distributed to others until 48 hours after submission. This gives me more time to review and remove all the junk.

The last time I wrote about attacks and abuse was November 19 (see below). In some ways it's gotten better but in other ways, a lot worse.

As noted in that post, if things get much worse, the service will go away. Until that happens, every time I get pissed off I'm going to add another 24 hours to the embargo time. Eventually it will be so long that the service will be useless anyway.

People are encouraged to start their own hosts.txt services.


Or get used to b32 only.



May 7, 2010

- Floodfill Counts - The floodfill count range was changed from 45-60 in 0.7.12 to 60-100 in 0.7.13. Since the 0.7.13 release on April 27, the average floodfill count went up by about 15% but the leaseset count went down by almost half, as seen on the floodfill graph page. What happened?

The answer is that we reached another big milestone: my router no longer knows almost all the floodfills, so the stats.i2p graphs, which estimate the number of leasesets by factoring in the number of floodfills, are no longer accurate. While my netDb shows that there are 55-60 floodfills out there right now, the actual number is probably closer to 100. The first milestone was not knowing all the routers. This milestone is not knowing all the floodfills.

Floodfill routers will know more of the other floodfills than non-floodfill routers do, but even they won't know all of them. I may try to write a netDb scraper that tries to more accurately assess the size of the network.

One other implication: the time is coming to improve the netDb lookup algorithms. Right now they are kad-like in that they search closest-to-the-key; however, they are non-kad-like in that they do not immediately follow the 'referrals' from an unsuccessful lookup, and they don't always follow them at all. I need to also study how the net behaves at midnight UTC, when the routing key changes - this 'scrambles' the whole netDb and thus provides a nightly preview of what the net will look like when we grow 10x.
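To make the "kad-like" part concrete, here is a minimal Python sketch of closest-to-the-key selection by XOR distance, with a daily routing key derived from the UTC date. All names are mine and the key derivation is simplified; the real format is defined in the I2P netDb specification, not reproduced here.

```python
import hashlib

def routing_key(dest_hash: bytes, utc_date: str) -> bytes:
    # Simplified daily routing key: hash the destination hash together
    # with the UTC date. At midnight UTC the date changes, so every key
    # jumps to a new spot in the keyspace, 'scrambling' the netDb.
    return hashlib.sha256(dest_hash + utc_date.encode()).digest()

def closest_floodfills(key: bytes, floodfills: list, n: int = 7) -> list:
    # Kad-like selection: sort candidate floodfill hashes by XOR distance
    # to the routing key and take the n closest. The real lookup differs
    # in that it doesn't always follow referrals from failed lookups.
    def xor_distance(peer: bytes) -> int:
        return int.from_bytes(bytes(a ^ b for a, b in zip(peer, key)), "big")
    return sorted(floodfills, key=xor_distance)[:n]
```

The nightly "scramble" falls out of this directly: change the date argument and the sort order over the same floodfill set is completely different.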

I've said before that we can probably grow about 5x, or 10K routers total, without much trouble. The issues above point to some things we have to work on to grow beyond that - I think at 25K routers or so things will break without changes. Not to worry, we have time.

Anyway, I adjusted the math for the leaseset graph to try to bring it back in-line with pre-0.7.13 numbers.


April 21, 2010

- Addressbook update intervals - Several of you out there are querying my hosts.txt service newhosts.txt every hour, either because you run an extremely old i2p installation where 1 hour was the default, or you changed it. Nobody needs to check hosts.txt every hour except for, maybe, the other hosts.txt services run by tino and sponge. Please change your settings to 12 hours minimum on your SusiDNS Configuration Page. Excessive requests may result in being blocked from all stats.i2p services. Thank you.


February 16, 2010

- With the release of 0.7.11 yesterday, the number of floodfills is increasing and has caused my total routers and leasesets graphs to go a little wild. I'll have to adjust the heuristic to try and make it look right again.

Update Feb. 21: While Kad-among-the floodfills is working great for LeaseSets (see below), clearly there is more work to be done for RouterInfos. The LS count per-floodfill is going down as expected with the increase in floodfills, however the RI count is staying quite high. This is due to two factors: 1) The much longer expiration time for RIs; 2) Everybody wants to talk to the floodfills, so they collect more RIs. So there is more work to be done. The obvious easy fix is to reduce the RI timeout for floodfills. I'll look at whether we can do that without hurting reliability, or something else.


January 23, 2010

- I'm extremely happy to announce that Kademlia-among-the-floodfills is definitely working. Since the 0.7.10 release yesterday, the number of floodfills has stabilized at about 12-13. Since the floodfills only flood to the closest 7, any number above 8 means that floodfills are only storing a portion of the total key space.

In other words, average routers and leasesets per-floodfill is no longer the same as total routers and leasesets in the network. This is confirmed by viewing the floodfill router and leaseset charts for yesterday and today. The drop in average leasesets (which expire quickly) is quite pronounced.

This is the culmination of a huge amount of work dating back to May 2009 with this proposal.

I'll shortly be changing the graph titles on stats.i2p from "Average Floodfill Reported" to "Estimated Total". I'll fix the graphs by multiplying the averages by ((total number of floodfills) / 8) when the total number of floodfills is greater than 8. But it looks like that will overstate things somewhat, especially for router infos that have longer expirations. (The reason is that while a floodfill floods to the closest 7, it doesn't always flood to the same 7, as it is always reclassifying the floodfills into "good" and "bad" based on performance.) So I'll experiment a little until it looks right. For now I will try a denominator of 9 for leasesets and 11 for routerinfos.
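The correction described above boils down to one formula. A sketch, with the denominator left as a parameter since I'm still experimenting; the function name is mine, not from the stats.i2p scripts:

```python
def estimated_total(avg_reported: float, num_floodfills: int,
                    denominator: float = 9.0) -> float:
    # Once there are more floodfills than the flood redundancy (8 = the
    # closest 7 plus the original store), each floodfill only sees part
    # of the keyspace, so scale the per-floodfill average up. Trying 9
    # for leasesets and 11 for routerinfos, to compensate for flooding
    # to a shifting set of "closest" peers.
    if num_floodfills <= 8:
        return avg_reported  # every floodfill still sees everything
    return avg_reported * num_floodfills / denominator
```

For example, an average of 100 leasesets reported across 18 floodfills would be graphed as an estimated total of 200.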

I think this vindicates floodfill as an approach that will continue to work for at least the next couple of years, as the net grows to 10,000 routers and beyond. The strength of a large, dynamic, untrusted set of floodfill peers is readily apparent. Note that Tor this week had to do a software release just to change the hard-coded references for two of their directory servers.

I will continue to increase the number of floodfills with each release, until it is essentially all the class O routers, or about 10% of the network. At that point, the load on each individual floodfill will be greatly reduced, and will be independent of network size.

Make no mistake, this is a huge milestone. Never again will we know the size of the network with absolute precision; we can only estimate it now. Also, as the net continues to grow, many of the charts on this site (particularly the totals) will become less and less accurate. I2P is growing up.


January 20, 2010

- The net hit 60% 0.7.9 on Monday, and the clouds broke, we have the best net weather in two weeks, so I'm changing the forecast to "Sunny!". Thanks to everybody for upgrading. Release 0.7.10 is coming soon and I'm changing the floodfill count to 10-15. Those of you still on 0.7.7 or earlier should definitely upgrade, as older routers won't work so well with a lot of floodfills.


January 13, 2010 (updated Jan. 15)

- 0.7.9 is released, but it's too early to say what the impact is, not much of the net has upgraded yet. One other thing though - I increased the number of floodfills from 4-6 to 4-9. This is the very small start of the end of knowing the total number of routers in the network, because each floodfill will only know a portion of the full network.

Each floodfill floods to the 7 floodfills closest to the key. So each key should be stored on 8 floodfills total. Since there will soon be 9 floodfills (the total tends to stay at the top of the range), that means we should theoretically multiply the reported netdb routerinfo and leaseset counts by 9/8. I'm not going to do that for now on the dashboard. In practice, I suspect that the 9th floodfill will find out too. But in the next release, when we raise the limit again...

Also, for the next week, as the net upgrades, the stats will be choppy, since new routers think 9 floodfills is the max but old ones think 6 is the max.

Update - sadly I only increased the max floodfill count and not the min; it appears that we are still stuck at 4-6 because of it.


January 6, 2010

- We had a big influx of new users starting on Sunday, and the net is struggling. I've updated the forecast on the dashboard to "stormy". The 0.7.9 release next week should improve things. I hope.


November 19, 2009

- Comments on hosts.txt issues:


November 18, 2009

- The old, inefficient, non-cgi hosts files /hosts.txt and /newhosts.txt have been removed to save bandwidth. The only hosts.txt service available here is the cgi version at /cgi-bin/newhosts.txt. As always, it provides the last six months of hosts. If you recently installed I2P, I recommend that you subscribe to another hosts.txt service as well, to receive hosts older than 6 months.


August 3, 2009

- Looks like 0.7.6 did the trick, we had no congestion collapse this weekend. I've updated the forecast on the dashboard to "sunny"!!!


July 31, 2009

- 0.7.6 is out and with it, the tunnel build stats are now normalized to percentages instead of absolute build counts. Yet one more step in the years-long process of removing netdb data. Percentages give us the network health data we need, and no more.


July 24, 2009

- The weather continues to improve. Not sure if the cause is fewer new routers or fewer old routers. In any case, I've updated the forecast on the dashboard to "scattered storms".


July 16, 2009

- Today we had our first day without tunnel build collapse in 7 weeks. I'm not declaring victory yet (it appears we had an unusually low number of routers and new routers), but it's good to know that the 0.7.4 and 0.7.5 changes are helping. I expect congestion will return on the weekend. 0.7.6 will, of course, include more changes to address these problems.


June 13, 2009

- 0.7.4 is out, ahead of schedule. Hopefully will start to see network improvement in a few days.

Also, routers now report the average of tx and rx bandwidth for both stats, to give a little more cover to those sending or receiving a lot of data (but not both). I will merge the graphs on stats.i2p at some point.
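In other words, both published stats now carry the same averaged value. A trivial sketch of the idea (not the actual router code):

```python
def published_bandwidth(tx_bps: float, rx_bps: float):
    # Publish the mean of send and receive bandwidth for both stats, so
    # a router sending far more than it receives (or vice versa) no
    # longer reveals that asymmetry in the netDb.
    avg = (tx_bps + rx_bps) / 2.0
    return avg, avg
```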


June 7, 2009

- Updated the Network Weather Forecast on the dashboard to say: Severe storms evenings and weekends. The primary cause is the majority of the peers hitting their connection limits. Symptoms are lost leasesets, high cpu, dropped tunnels, etc. Working hard on a series of last-minute improvements before the 0.7.4 release next weekend. By two weeks from now (the weekend of the 20-21st), we should know how we did, and see how much more there is to do.


May 7, 2009

- ... still tweaking things.


May 2, 2009

- ... and the new router list is gone too.


May 1, 2009

- I think I've fixed most of the issues from the big changes described below. I just turned off rDNS too, which was a big security issue, and not too useful since the real way to look at it is not by TLD but by geoIP. If somebody would like to take a feed of IP's or new IP's and run rDNS or geoIP on it and make pretty charts and graphs, that would be nice. Contact me if you'd like to do this. Or you can parse netdb.jsp yourself. You have one too, remember. Not like mine is anything special.


Apr 30, 2009

- More or less, it's the 4th anniversary of stats.i2p. When I started it, there were maybe a few hundred routers a month in the network. Now there's almost 7000 per month (double what it was 2 months ago) and our growth is really picking up speed. I'm still spending a lot of time on I2P code and that leaves me little time for stats.i2p. The recent cgi-bin crawler fiasco also pointed out some problems.

When I created stats.i2p, there was nothing else like it for i2p. And I barely knew anything about i2p either. I didn't understand 10% of what I was graphing. But it helped out jrandom, and later it helped me out too, as I started contributing code.

However, it also pointed out what was possible in netDb analysis, and how much of what was published really compromised anonymity. Since that time 4 years ago, we've removed probably 80% or more of the stats in the netDb.

Unfortunately many of the scripts are creaky and slow and really can't keep up with the growth of the network. It's running on a pretty fast machine but with only 384MB of RAM... I may be able to steal some RAM from another machine, but the writing is on the wall. The every-5-minute script (increased to 10 min now) is taking about 2 minutes to run, and the nightly script takes over an hour - and there's over 20GB of RRD's on disk.

So a lot of the graphs and scripts are going away. They have to. There's so many stats that aren't even in the netDb any more. The graphs for individual routers - gone. The top 100 uptime, which was always bogus - gone. The sort by TLD - gone. The summary plots with 1000 routers on one graph - gone. I'm not planning on shutting it down completely, we still need it to monitor the network, but a lot of pages are going to go away. They have to or it's going to thrash my hard disk to pieces.

This site was never going to be forever, as we're gradually removing all the stats from the netDb. There's only a few left now. And the ones that remain are less accurate. Uptime is gone, and all the stats are 60 minute averages now. Also, as we continue to reduce in-memory storage of netDb info, it's harder to make comparisons to old stats. The net is growing faster than it appears... we're battling to reduce memory usage to compensate, and that makes the router counts look lower than before.


Apr 19, 2009

- 0.7.2 is out, so I started deleting all the uptime graphs. Also worked on the script to handle the tunnel stat change from 10m to 60m. We'll see if I broke anything.


Apr 5, 2009

- Welcome to all the new users, many from France and Sweden. The network is holding up quite well. All the hard work we put into improving the software in 2008 is paying off, and it appears that we timed our publicity (CLT, gulli interview, Pet-Con) perfectly.

0.7.2 will include two netDb stats changes to improve anonymity. First, the actual uptime will no longer be published, it will be spoofed to 90 minutes. Second, the tunnel build stats will be changed from a 10 minute to a 60 minute stat. We continue to rely heavily on this stat to diagnose network performance and measure the effect of code changes; however, a 60 minute average is sufficient for these needs.


Feb 1, 2009

- Fixed a problem in the scripts with bandwidth for 0.7 routers. Also, almost all floodfills and 75% of all routers have upgraded after only one week. Thanks! Exploration should work well now. I'll have to do some more testing to verify. Tunnel build % still can be very bad at times, especially on the weekends... will have to research further, may need to adjust the profile calculations again.


Jan 10, 2009

- Happy new year! A change introduced in 0.6.5-8 will reduce your number of known routers by 150-300, because it expires routers with introducers much earlier to aid in reachability. As the floodfill routers upgrade (most, presumably, after the next release), expect the "Floodfill Reported Routers" graph, which is an average of all floodfills, to drop from ~1000 to ~700. It is not the apocalypse, don't panic.

By the way, if you happen to know who runs floodfill router uXZW, it would be nice if he upgraded to 0.6.5 at least :)


Nov 03, 2008

- Found a router publishing this crap:
stat_bandwidthReceiveBps.5m = 65,911.56;15,962,495,518,388,668.00;0;0;
stat_bandwidthReceiveBps.60m = 86,166.05;1,330,350,044,038,162.00;0;0;
stat_bandwidthSendBps.5m = 318,060,098.46;17,724,590,277,352,856.00;0;0;
stat_bandwidthSendBps.60m = 98,277.84;1,478,175,289,372,482.50;0;0;
So there is a bug somewhere. In the meantime, fixed up the netdb parsing script and added some RRD limits to remove the bad data from the RRDs. The receive and send bandwidth graphs for the last year are now good again.
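The RRD limits amount to treating implausible samples as unknown rather than recording them. RRDtool does this natively with per-data-source min/max bounds; here is a sketch of the idea, with illustrative bounds and a function name of my own:

```python
def clamp_sample(value: float, lo: float, hi: float):
    # A sample outside the plausible range is recorded as unknown (None)
    # instead of poisoning the averages, which keeps absurd readings
    # like 1.5e16 Bps out of the bandwidth graphs.
    return value if lo <= value <= hi else None
```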

Also added some code to the script to handle the upcoming removal of the 5m bandwidth stats in release 0.6.5. And tweaked the total routers plots to smooth them out, so that when a new floodfill shows up and doesn't know too many routers, it doesn't cause a sudden drop in the graph.


Oct 27, 2008

- Localhost netdb links on stats.i2p now link to the new netdb.jsp page on your router that shows only one routerInfo, for example see your own routerInfo. This requires release 0.6.4.

Also, found a bug that was causing us to be stuck at 7 floodfill routers when the target is 4-6. Checked in with 0.6.4-7.

Also, removed the 5m bandwidth stats and the tunnel.buildRequestTime.10m stat in 0.6.4-7, but with code that keeps them until the 0.6.5 release to provide cover to developers, of course. Will have to tweak stats.i2p to switch over to the 60m bandwidth stats, haven't done that yet.

Again I ask the floodfill routers to keep up with the latest release. There is one floodfill router still running 0.6.2, and one running 0.6.3. Thanks.


Oct 6, 2008

- 0.6.4 is released, and there were no netDb stats changes. Not sure why tunnel.buildRequestTime.10m is still in netDb, maybe that should come out for 0.6.5. It's not used here on stats.i2p.

Also, why do we have 8 floodfill routers? If you enabled it recently, please turn it back off. Thanks.


Aug 24, 2008

- 0.6.3 is released, and I'm deleting the removed stats from stats.i2p (see the June 2 entry below for a list). A row in the dashboard charts was removed also.


Aug 5, 2008

- Working on countermeasures to the recent floodfill flood. See http://zzz.i2p/ for information on the 0.6.2-10 and 0.6.2-11 mtn checkins. More to come, maybe.


July 3, 2008

- In case you missed it, auto-floodfill for class O routers was checked in a few weeks back. Details on the floodfill page.

This doesn't fix all our floodfill problems and vulnerabilities but it should prevent one particular type of disaster (i.e. running out of floodfills).


June 3, 2008

- Also, it was never a secret, but if you don't want to publish any stats for YOUR router, see the instructions on zzz.i2p. Two of the floodfills did this yesterday, which screwed up my floodfill stats, but I fixed it. I'll probably change the i2p code in 0.6.3, so that a floodfill will still publish known routers and known leasesets, even if stat publishing is off, unless there's an objection. Seems reasonable to me anyway.


June 2, 2008

- See the codevoid forum for a discussion of netdb stats, vulnerabilities, and benefits. See the index of stats for a list of netdb stats that are included, and those that have already been removed. Several more are proposed for removal in release 0.6.3; please comment over at the NetDb TODO topic on zzz.i2p.

Also, I missed it, but May was the 3-year anniversary of stats.i2p.


Apr. 26, 2008

- A new release is out. All the 60 second stats, tunnel.testFailedTime, and router.fastPeers were removed in it. They will be removed from the stats.i2p charts soon.


Mar. 31, 2008

- Charts for the removed stats listed below were taken out of the individual router charts pages. Floodfill pages updated again.


Mar. 16, 2008

- Several stats were removed from the netDb as of the latest release. This is part of the plan to lessen the load on the floodfill routers and make the network more efficient. These stats will eventually be removed from stats.i2p as well, but in the meantime you shouldn't pay much attention to them here, since about 1/3 of the network is now running that release.

Removed stats: router.invalidMessageTime, router.duplicateMessageId, tunnel.batchMultipleCount, tunnel.decryptRequestTime, tunnel.fragmentedDropped, tunnel.testSuccessTime, udp.*


Mar. 15, 2008

- Got rid of one row on the dashboard and moved some things around. Floodfill charts now on the top row.


Mar. 10, 2008

- Added version info to the new routers page.


Feb. 6, 2008

- Added floodfill one week and floodfill one month charts. More floodfill routers have appeared also.


Jan. 21, 2008

- Added -qwS to the stats page for floodfill routers.


Jan. 10, 2008

- Sped up the domain counting script and added a new second-level domain counting script. Need to go through and speed up some of the other scripts.


Jan. 1, 2008

- Happy New Year! Added floodfill-reported total router and leaseset stats to the dashboard. Both HF0j (=dev.i2p) and average are plotted. (There are two other floodfill routers at the moment, aAVQ and llOi; llOi is a recent arrival, appearing December 20, 2007.) Here is a new stats page for floodfill routers.


Dec. 18 2007

- My reading of the 3 Month Send Bandwidth Chart is that the release 9 weeks ago really helped. In particular, the change that sends messages for the same destination out the same tunnel.

So where do we go next? JR's last checkin, minutes after the .30 release, backed out the pushback change for possible anonymity concerns... which I don't fully understand. This could significantly hurt the effectiveness of the consistent-tunnel change.

To fully improve things we should implement receive-message tunnel consistency... but that's harder...

For a little more analysis, let's turn to the 1 Year Send Bandwidth Chart. The bandwidth numbers are the highest sustained numbers since June. But the higher rates in June were due to a temporary configuration problem at stats.i2p, so we can ignore those. Before that, the highest rates were in early March and in January. Those rates resulted from the poor defaults in the streaming lib, which were corrected in the release in mid-March. The key problem before then was a maximum retransmission timeout of 10 seconds. Because round trip times can easily be 20 to 40 seconds, this resulted in massive retransmissions and congestion in the network, and therefore high send bandwidths. The mid-March release increased the max timeout to 45 seconds, which dramatically - and permanently - reduced the high and wasteful traffic.
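Back-of-the-envelope arithmetic for the retransmission problem, ignoring timer backoff (a simplification of mine; the streaming lib's actual timer logic is more involved):

```python
import math

def spurious_retransmits(rtt_s: float, max_rto_s: float) -> int:
    # If the retransmission timeout is capped below the round trip time,
    # a segment gets resent every max_rto seconds until its ack finally
    # arrives. With an rtt of 30s and the old 10s cap, that's 2 wasted
    # copies per segment; with the 45s cap, none.
    return max(0, math.ceil(rtt_s / max_rto_s) - 1)
```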

So in conclusion, I claim that the recent higher bandwidth is "good" traffic, rather than the "bad" traffic seen in January - March of this year.


Oct. 29 2007

- Changed the first-seen date format to add the year, for example on the Top 100 Uptime page. If the time is shown instead of the year, it's this year.


Sep. 14 2007

- Probably spoke too soon, minimum build success over last few days has ranged from 45% to 60%, not clear why.

Big hopes for the memory leak fix checked in as -4, please test and report via #i2p. The fix should, of course, reduce memory usage and restarts caused by OOMs.

Some helpful hints on debugging memory issues on i2p posted on the zzz page.


Sep. 10 2007

- The positive effects of the .29 release are becoming apparent. As we know, there are poor performance periods that occur every day late in the day (UTC) when the highest number of transient, low-bandwidth routers are on the network. As shown on the Tunnel Build Success chart, the build success minima at these times have dramatically improved from 40% to ~65% and climbing. This is with only 2/3 of the active network upgraded to date. Why did it take 10 days after the new release to start showing improvement? Good question...


Aug. 25 2007

- Added a separate Peer Capacity Index Page to better see the effect of the .29 release. The peer index stats older than 1 day were not previously available. When most of the network has upgraded, the index should settle in the range 1.5-2.0. Best of all, we should no longer have any periods when the index drops below 0.8, which seems to correlate with really bad network performance. These periods usually occurred at the daily peak time just before midnight UTC when the largest number of transient, low-bandwidth routers are on the net.


Aug. 23 2007

- .29 released, time to watch the stats and see the effects of having almost everybody route tunnels.


Aug. 11 2007

- Had some connectivity problems the last few days, hopefully resolved. Did the news.xml bandwidth plea help? Seems like it did...


Jul. 6 2007

- Fixed a problem that led to a much lower number of routers being reported in June.


Jul. 1 2007

- Checked in a fix for the UDP AlwaysPreferred config setting. This will allow easy switching of SSU vs. NTCP preferences while running, so that the relative performance of the two transports can be compared and analyzed.

Mar. 27 2007

- Over a week into the .28 release (with almost 70% of the network upgraded), a couple of stats show the improvement. Client tunnel build expire percentage (3 month graph here) is way down, especially the peaks. This is from prioritizing tunnel messages. Send bandwidth is staying pretty low as the streaming lib has cut way back on retransmission.

See the zzz blog on Syndie for my thoughts on the next steps to improve performance, which involves favoring SSU over NTCP rather than the current vice-versa.

Mar. 8 2007

- Tunnel build priority changes checked into CVS a few days ago may help to break the correlation between bandwidth and tunnel build failure. Also something to look forward to: the streaming lib changes just checked in, which allow applications such as i2psnark to back off outbound transmissions much more than was previously the case. Changes to slow down i2psnark's outbound a little more were also checked in. This completes the 3-part upstream performance project. The full effects won't be seen until after the release, but test results look good so far.

Mar. 7 2007

- The old zzz.i2p is now here, not worth running a separate tunnel. Mostly old patches and such.

Jan. 26 2007

- Was searching for the stat that correlated best with tunnel build success. By far the strongest correlation is bandwidth. As seen on the Bandwidth Vs. Build Chart, as bandwidth approaches 20 KBps, tunnel build success drops dramatically. What can we learn from this and how can we make tunnel building more resistant to high traffic loads?

Jan. 16 2007

- Fixed the Domains chart by removing over 2000 old routers, and fixing the linked charts such as .edu which were returning 500 codes.

Dec. 27 2006

- Christmas eve server crash, coming back to life today.

Dec. 14

- Changed the cutoff for old data from 1.5h to 3h in an attempt to more accurately capture the state of the network. The 1.5h cutoff was clearly eliminating routers that were still on the network. This will cause a huge discontinuity in the data, but the effect hopefully will be better information.

Update - the peer capacity chart shows a discontinuity but most of the other stats don't. We'll see after a few days if there are some longer-term effects.
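The cutoff change amounts to widening a last-seen window. A sketch of the filter (names are mine, not from the actual scripts):

```python
def active_routers(last_seen: dict, now: float, cutoff_hours: float = 3.0) -> list:
    # Keep only routers seen within the cutoff window. The old 1.5 hour
    # window was discarding routers that were still on the network.
    cutoff_s = cutoff_hours * 3600.0
    return [r for r, t in last_seen.items() if now - t <= cutoff_s]
```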

Nov. 4

- HD died, mostly recovered from backups after a day and a half. Yay.

Oct. 27

- Way back in February '06, tunnel.testSuccessTime.24h and tunnel.testSuccessTime.60m were replaced with tunnel.testSuccessTime.10m. stats.i2p never noticed and the old stats were broken. Fixed today. The stat is still labeled .60m on most web pages (for database compatibility) but it's actually the .10m data.

Ditto udp.ignoreRecentDuplicate.10m, replaced with udp.ignoreRecentDuplicate.60s. Fixed today.

Oct. 26

- Wow. First news in five months. Stats.i2p was down for a week and a half or so earlier this month, sorry about that.

The first modest change to the site in several months is the addition of peer capacity % and "peer capacity index" charts to the dashboard, and a peer capacity totals chart to the daily total routers page. As jrandom explained in the October 3 Status Notes, there are two tiers of routers, sorted by bandwidth limits. Tier A (capacities L, M, N, and O) have bandwidth limits higer than 16 KBps and provide participating tunnels for other routers to use. Tier B (capacity K) has a bandwidth limit less than 16 KBps and does not provide participating tunnels for other routers to use. Obviously, if there are not enough tier A routers compared to tier B routers, tunnel building capacity will suffer. So plotting the percentage of the various capacities can give a good snapshot of the network. And the ratio (L + 2M + 4N + 8O) / (K + L + M + N + O) provides an excellent indication of the tunnel-building capacity of the network. I call this the "peer capacity index". This formula is somewhat arbitrary, but it reflects the fact that successive capacity-letters have double the capacity of the previous letter.

See both the percentages and the ratio plotted on the dashboard. With some study of these new charts, and comparison to other statistics, particularly tunnel build success percentages, hopefully we can learn some things, such as what minimum ratio is required for good tunnel build percentages.

One other note: the change released in early October which disabled dynamic router keys has resulted in a huge drop (50% or so) in the number of routers stats.i2p sees every month, and in the duplicate routers seen. This also results in much lower disk space usage for stats.i2p.

May 14

- Happy birthday stats.i2p. It was fired up about one year ago. The oldest surviving stats date back to early June 2005, you can look at the one-year router totals to see the trends. The network is about 50% bigger than it was a year ago.

May 10

- Added year plots for averages; the averages data is now about 6 months deep, so the year plots look nice. Links on the averages page.

May 9

- Seems like tunnel reliability goes down in a hurry when client tunnel build success % goes below 20% (the green area of the plot above left). Adding backup tunnels can help, as jrandom said in today's meeting.

April 22

- stats.i2p returns to find a world of "congestion collapse". But the latest release appears to have substantially improved the situation.

March 5

- Notice: stats.i2p will be down from late March through mid-April. Sorry for the inconvenience.

February 21

- Now using the netDB harvester task via exploratory tunnels (see history.txt for details), which is significantly improving the quality of the data; it's now at least as good as it was before.

February 18

- Upgrade to the new release has caused a big upheaval in the stats for several reasons.

Anyway, new client and exploratory tunnel build stats on the third row of the dashboard. A little rocky at the start as script bugs got fixed but should be stabilizing.

February 1

- Averages and totals data are almost 3 months deep, so added 3-month views; links on the summaries, averages, and totals pages.

January 29

- Running new HTTP/1.1 persistent/pipelined connection code on this server. Of course it should be compatible with all clients. Just fixed a bug with large cgi output (i.e. chunked) which was preventing those pages from displaying. Probably more bugs to be found.

January 11, 2006

- Jrandom has fixed a bug that was contributing to router duplicates - if a router had large clock skew it would create a new identity. The fix will be in the next release. Hopefully the number of duplicates will lessen, especially since stats.i2p just surpassed 5000 routers seen in the previous month. As this requires over 15G of storage for the stats, hopefully the number of routers will eventually go down. If not, the policy will be changing soon. Already ran out of disk space once, don't want it to happen again.

Also, there's a bug in CVS or maybe even in the released code which corrupts the HTTP GET request with one or more prepended NULLs. This causes stats.i2p to return a 405 (bad method). Retry and it should load correctly. Not sure at this point if the problem is on the client or server side.
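If the corruption turns out to be client-side, one possible server-side workaround would be to tolerate the prepended NULLs when parsing the request line. A hypothetical sketch (not actual stats.i2p code):

```python
# Hypothetical workaround sketch: strip any leading NUL bytes before
# parsing the request line, instead of returning 405 (bad method).
def parse_method(request: bytes) -> str:
    request = request.lstrip(b"\x00")   # drop the corrupting prepended NULs
    return request.split(b" ", 1)[0].decode("ascii", "replace")

method = parse_method(b"\x00\x00GET /index.html HTTP/1.1\r\n")
```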

Summary plots (the ones with a line for each router) changed from 1 day to 12 hour to reduce plot generation time and memory usage.

December 22 2005

- Added some current stats to Top 100 Uptime. A nice way to see network health of those mainstay routers that are up almost all the time.

December 15

- REALLY fixed some problems with the router total and % uptime for the previous month. It wasn't really fixed on Dec. 7.

Fixed tunnel.corruptMessage.3h.

Summary stats reenabled since netDB expiration fixed.

December 14

- Massive duplicates apparently from (vserver.localhost) (listings here) result from a longstanding and undiagnosed bug in TCP transport. Not a high priority as TCP support will be dropped soon. See also Tino's post on the forums.

December 14

- Jrandom fixed the netDB router expiration bug introduced in a recent release; you can again see routers listed in red in the 'last update' column on the new routers page. Several stats may be affected by the change. stats.i2p was running code with this bug from Dec. 2 through Dec. 13. Stats for individual routers in particular may show long periods with constant values during this period. Averages may become somewhat more volatile now that this is resolved.

December 11

- Fixed router.duplicateMessageId.24h, udp.ignoreRecentDuplicate.10m, udp.statusDifferent.20m, udp.statusOK.20m, udp.statusReject.20m, udp.statusUnknown.20m, tunnel.corruptMessage.60m, udp.addressTestInsteadOfUpdate.60s, and udp.addressUpdated.60s which were broken since the very beginning. Also shown on the dashboard (middle in the third, and first and middle in the fourth rows).

December 9

- Improved router uptime caused the netDB size, and therefore the version chart, to grow massively. So the version chart now only plots routers with data published to the netDB less than 6 hours ago. The total dropped from 800+ to ~300, should now accurately reflect the recent netDB, and won't glitch so much when the router restarts.

Big summary plots temporarily disabled until the netDB router expiration is fixed.

The tunnels build patch at zzz.i2p has had a big effect in improving reliability. Please help test the patch. In CVS now.

December 7

- Fixed some problems with the router total and % uptime for the previous month.

Added highlights for the domains which are contributing a higher-than-average number of new routers at the bottom of the new routers page. Sweden is contributing lots of newcomers at the moment.

December 6 00:00 UTC

- Fixed tunnel.buildSuccess.60m which was broken since the very beginning. Also shown on the dashboard (first in the third row). The actual build success average is about 40%. This is a stat for non-exploratory tunnels. The rate for exploratory tunnels: 1 hop 8.7%; 2 hop 1.5%; 3 hop 0.3%.
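As a back-of-envelope check on those exploratory numbers: if each hop accepted build requests independently with probability p, an n-hop tunnel would succeed with probability p^n. Solving for the implied per-hop rate (my own arithmetic, not from the router code):

```python
# If each hop accepts a build request independently with probability p,
# an n-hop tunnel succeeds with p**n, so p ~ observed_rate ** (1/n).
observed = {1: 0.087, 2: 0.015, 3: 0.003}   # exploratory rates from above

implied = {n: rate ** (1.0 / n) for n, rate in observed.items()}
# The implied per-hop rates rise with hop count, so hop failures are not
# independent (or exploratory peer selection is hitting overloaded routers).
for n, p in sorted(implied.items()):
    print(f"{n}-hop: implied per-hop acceptance {p:.3f}")
```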

December 5

- Daily purging has begun. Database stable at about 3300 routers.

November 27

- stats3k - 3000 routers! 3000 in database after about 29 days. We'll start purging old routers in two more days so we should top out at 3200 or so, as predicted.

November 26

- Now updating IP:Port for routers. Hopefully will cut down on duplicates.

Changed the Max Published time for valid data from 2h (which was really 2.99 hours because of rounding) to 1.49 hours. We'll see how that looks. This will remove about 20% of the routers previously included and will make the stats a little more responsive. Even this cutoff may be too high; routers publish to the netDB every few minutes.
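To illustrate the rounding problem (my reconstruction of it, not the actual script code): if the age is truncated to whole hours before the comparison, a nominal 2h cutoff admits anything under 3 hours.

```python
# Why a "2h" cutoff really admitted ~2.99h of data: truncating the age
# to whole hours before comparing lets anything under 3 hours pass.
def passes_old_cutoff(age_hours: float) -> bool:
    return int(age_hours) <= 2          # truncation: 2.99 -> 2, passes

def passes_new_cutoff(age_hours: float) -> bool:
    return age_hours <= 1.49            # compare without truncating

assert passes_old_cutoff(2.99) and not passes_new_cutoff(2.99)
```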

November 23

- Important - Major changes in average calculations to improve data quality.

As of 22:25 UTC, the averages and totals no longer include stats from routers whose netDB entry was published more than $old=1.99h ago, or whose uptime is less than $up=15m. This will make the averages much more responsive to actual network conditions, since old data is thrown out, and more accurate, since brand-new routers are ignored. Some charts are much higher (new routers reporting zero or low numbers are now thrown out) and some are much lower (the ancient netDB data were often for routers with high values?).

Routers reporting a value of zero for stats where that clearly makes no sense (bandwidth, processing times, etc.) have been removed from the calculations of averages for those stats, as of 19:20 UTC. This will give a much more accurate calculation.

Also, a stat isn't included in an average if the router's uptime isn't long enough: xyz.60m will always be zero until the router has been up 60m. The router might as well not publish those stats. May be a patch coming.
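Taken together, the filtering rules in this entry can be sketched as follows (the variable and stat names are my own, for illustration):

```python
# Sketch of the averaging filters described above: drop samples from stale
# netDB entries, from very young routers, zero readings for stats where zero
# makes no sense, and stats whose time window exceeds the router's uptime.
MAX_AGE_H = 1.99      # $old: ignore netDB entries published > 1.99h ago
MIN_UPTIME_M = 15     # $up: ignore routers up < 15 minutes
NONZERO_STATS = {"bw.sendRate", "jobQueue.jobRunSlow"}   # hypothetical names

def include(stat, value, age_h, uptime_m):
    if age_h > MAX_AGE_H or uptime_m < MIN_UPTIME_M:
        return False
    if value == 0 and stat in NONZERO_STATS:
        return False
    if stat.endswith(".60m") and uptime_m < 60:   # window longer than uptime
        return False
    return True

samples = [("bw.sendRate", 0, 0.5, 120), ("bw.sendRate", 12.5, 0.5, 120),
           ("tunnel.buildSuccess.60m", 0.4, 0.5, 30)]
kept = [s for s in samples if include(*s)]
```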

Average and current legend added to individual router stat charts.

November 20

- New shiny stats "dashboard" with 10-minute auto-refresh! Network-at-a-glance... over 25 stats in 12 compact charts. Also uses the new database. No possibility this could have happened the old way.

November 19

- New, much more efficient method of storing and presenting averages and totals. New database starts today and will not be overwritten, so historical averages will now be available for plotting. It will also stay current with new routers, unlike the old plots which only added new routers once a day. Individual router stat databases were not reset and remain at a 31-day depth.

See new network health page here and total page here (works in progress), based on the new database.

See also shiny new chart/navigator pages for individual stats: averages here and totals here, again based on the new database.

November 17

- Charts for summary, average and total now have average of the average in the legend. Current for the average coming tomorrow. Also working on storing the averages differently so they can be plotted faster and we can go back farther than one day. Coming soon hopefully.

November 15

- 2100 routers after 19 days. Projected to hit about 3200 routers for the entire month, at which point old routers will start being purged, leaving a fairly constant 3200-router database after that.

November 14

- Over 2000 routers. Day and week selectors (custom time periods) added to day and week views for individual routers.

November 9

- New Duplicate IP:Port Listing Page. New world map on New Routers page and Router Domain page.

November 2

- wow 1000 routers after only 5 days and averaging over 100 new routers a day (including duplicate IPs). The network is way bigger than two months ago. At this rate I'll need about 10 GB for a month's worth of data. Three day total and average plots have been disabled as they take several minutes to generate due to the large number of routers. Really need to get to the bottom of the duplicates. Maybe a PC issue? Dialup issue?

October 30

- New router version chart. Click here for latest view.

Also, unfortunately, the router is still crashing often, but with the watchdog reenabled, at least it doesn't have to be reset manually. Looks like a spider is going through all the cgi scripts which is really slowing things down and contributing to the crashes. I created a robots.txt file to exclude /cgi-bin so hopefully this will keep it from happening next time.
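For reference, the exclusion is a standard robots.txt placed at the site root (this is the general form; the actual file may differ):

```
User-agent: *
Disallow: /cgi-bin
```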

October 28

- Back after two months offline. I had to stop running I2P in late August because it had the same old 100% CPU problem, but it had gotten worse. It used to die about once a day, but with the newer release (and maybe .2 as well - can't remember), it would die after about an hour. I just couldn't babysit it, constantly restarting the darn thing every hour. So anyway, after a fair amount of time and intervening distractions, stats.i2p is back. Let's see if the CPU problem has been licked. If so I'd like to keep the server up and running. Of course, please allow several days for the stats to stabilize. The stats on individual routers are only kept for a month so we're starting over...

August 17

- UDP stats added to individual routers plots for day and week (not month)

August 15

- 10 new stats added for UDP, including 6 added in the upcoming release (summaries and totals only, no individual routers)

August 13

- Back after a month of downtime, and upgraded to the latest release. Sorry about the downtime. Please allow several days for the stats to stabilize.

July 6

- What is the deal with all the duplicates among New Routers??? (update 8/19: the following links don't have duplicates in them anymore) What is the deal with all the duplicates in proxad.net users??? What is the deal with these multiple identities??? If your router has assumed multiple identities or you see your IP address listed as a duplicate, please join #i2p so we can figure out why it's happening. It shouldn't be happening several times a day.

June 30

- new domain lookups linked from Router Domain Totals page. Ever wanted to see All the routers in .de ???. New duplicates listed and linked to IP lookup page. Note this multiple identity. On purpose? bug? install problem? To be researched...

June 27

- new Router Domain Totals, also added domains to bottom of New Routers Last 24 Hours.

June 25

- new list New Routers Last 24 Hours, with IP and RDNS (also added to Top 100 Uptime). Also new links to local netDB and profiles.

June 20

- new list Top 100 Routers by Uptime

June 12

- Individual router page now lists index by first letter of router. Descriptions of stats now given on summary and total pages. Now over 31 days since Stats.i2p came up. Now removing routers not seen in 31 days from the database. If the site can stay up for a month (straight), the database should stabilize at about 1500 routers.

June 9

- back after two weeks of downtime. Sorry! New/vanished stats should stabilize after a couple of days.

May 24

- stats.i2p sees its 1000th router after less than two weeks of tracking!!!

May 2005

- stats.i2p open for business