Discussion: Client Networking Issues / NIC Lab
Kevin Bowling
2021-04-23 05:22:30 UTC
Greetings,

I have been looking into client networking issues in FreeBSD lately.
To summarize the situation, common NICs like Intel gigabit (e1000 aka
lem(4)/em(4)/igb(4)), Realtek (re(4)), Aquantia, and Tehuti Networks
are unmaintained or not present on FreeBSD. The purpose of this
thread is to gauge whether that matters, and if it does what to do. I
believe it is important because we are losing out on a pipeline of new
contributors by not supporting client hardware well. We risk losing
NAS, firewall, and other embedded users which may not be large enough
to negotiate with these vendors for support or have the volume to do
custom BOMs to avoid risky parts. My opinion has been developed after
researching the drivers, Bugzilla, and various internet forums where
end users exchange advice or ask for help where FreeBSD is the
underlying cause.

e1000 is in the best shape, with recent vendor involvement, but covers
20 years of silicon with over 100 chipsets (of which at least 60 are
significant variations). Datasheets are readily available for most of
them, as well as "specification updates" which list errata. There are
chipsets which have been completely broken for several years. More
commonly, there are cases that lead to user frustration, including with
the most recent hardware. All of the silicon tends to have
significant bugs around PCIe, TSO, NC-SI (IPMI sideband), arbitration
conflicts with the ME, and more. Intel doesn't patch the microcode on
these, but many of the issues can be worked around in software.
Performing an audit of the driver will take quite a while, and making
and testing changes gives me concern. When we (my previous employer
and team) converted these drivers to iflib, we fixed some of the
common cases for PCIe and TSO issues but only had a handful of chips
to test against, so the driver works better for some and worse or not
at all for others. I have started fixing some of the bugs in
Bugzilla, but I only have a few e1000 variants on hand to test, and I
have an unrelated full time job so this is just occupying limited
spare time as a hobby.
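
To make "worked around in software" concrete: most of these fixes
hang off a quirk table keyed by PCI device ID that the attach path
consults once per device. A minimal self-contained sketch follows;
the IDs and flag names are placeholders I made up for illustration,
not the real e1000 identifiers.

#include <stdint.h>
#include <stdio.h>

#define QUIRK_TSO_BROKEN   0x0001 /* TSO must be capped or disabled */
#define QUIRK_PCIE_RETRAIN 0x0002 /* link needs retraining at attach */
#define QUIRK_NCSI_SHARED  0x0004 /* port shared with BMC sideband */

struct nic_quirk {
        uint16_t device_id;
        uint32_t flags;
};

/* Placeholder IDs; a real table grows one entry per errata set. */
static const struct nic_quirk quirk_table[] = {
        { 0x1234, QUIRK_TSO_BROKEN },
        { 0xabcd, QUIRK_PCIE_RETRAIN | QUIRK_NCSI_SHARED },
        { 0, 0 }
};

static uint32_t
nic_quirks_lookup(uint16_t device_id)
{
        const struct nic_quirk *q;

        for (q = quirk_table; q->device_id != 0; q++)
                if (q->device_id == device_id)
                        return (q->flags);
        return (0);
}

int
main(void)
{
        /* An attach routine would consult the table once per device. */
        printf("quirks for 0x1234: 0x%x\n", nic_quirks_lookup(0x1234));
        return (0);
}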

re(4) is in a pretty abhorrent state. All of these chips require
runtime patching of the PHY (which I believe is a DSP algorithm that
gets improved over time) and MCU code. That is totally absent in
FreeBSD. A vendor driver exists in net/realtek-re-kmod which contains
the fixups and works alright for many users. This driver cannot be
imported into FreeBSD as is: it makes strange use of the C
preprocessor, which needlessly blows up compile time and driver size.
The out of tree driver has a different set of supported adapters, so
some kind of meld is necessary. Realtek does not provide public chip
documentation; I am trying to see if they will grant NDA access to
contributors.

Aquantia has an out of tree driver in net/aquantia-atlantic-kmod. The
code is not currently in a place where I'd like to see it in the tree.
I am not really sure how common these are; the company was acquired
by Marvell, which is still producing them as a client networking
option while it has other IP for higher end/speed.

Tehuti Networks seems to have gone out of business. Probably not
worth worrying about.

1) Do nothing. This situation has gone on for a while. Users are
somewhat accustomed to purchasing FreeBSD-specific hardware for things
like SOHO gateways and NAS. A lot of people just revert to Linux
for client use. OpenBSD seems to have more active contribution around
this kind of thing and works better for common cases, so that may be
another exit ramp.

2) Quantify usage data and beg the vendors for help. This might work
for Intel; however, these devices have transferred to a client team at
Intel that does not plan to support FreeBSD, and Intel does not keep
test systems around long enough to meet FreeBSD users' needs. Realtek
is a similar story; I am unsure how long they hold on to test systems
and would probably need technical guidance to work with the FreeBSD
community. I am unsure about Marvell; I've never worked with them.

3) Build a NIC lab and focus on building community support. It would
also give the vendors a place to test hardware their labs have purged
(due to IT asset management policies or other bureaucratic blunders).
Set some boundaries, like a 15-year window of chipsets, which should
cover practical embedded use cases. There are backplane systems
and/or external PCI(e) expansion systems that could be assembled to
house a large number of NICs. It would probably come in cheaper, but
a budget of, say, $15,000 USD would be enough to purchase some
expansion chassis, a couple of managed switches, and a few dozen
common NICs. Community members may also send in NICs they wish to see
supported or tested. For this to work out long term, there needs to
be a quorum of people interested in collaborating on the issue.
There are some risks around simply setting this up: depending on the
configuration, the bus topology may introduce problems unrelated to
the NICs, and we'd probably need some semi-automated device.hints or
devctl tooling to keep from overprovisioning system resources (work
on a subset of cards at a time; a sketch of the devctl approach
follows this list). An interesting extension of this would be a
semi-automated validation setup for subsystem changes (significant
driver changes, iflib, LRO, etc.).

4) ???
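
Regarding the device.hints/devctl piece of option 3: FreeBSD already
has the mechanisms. A line like hint.em.0.disabled="1" in
/boot/device.hints keeps a unit from attaching at boot, and devctl(3),
the library behind devctl(8), can toggle devices at runtime. A rough
sketch of a selector tool, assuming devctl(3)'s documented
devctl_enable()/devctl_disable() interface:

/*
 * nicsel: keep only the NIC under test attached.  Run as root:
 *     ./nicsel em0 igb0 igb1 re0
 * enables em0 and disables the rest, so a chassis full of cards
 * doesn't tie up MSI-X vectors and other resources all at once.
 */
#include <devctl.h>
#include <err.h>
#include <stdbool.h>

int
main(int argc, char *argv[])
{
        int i;

        if (argc < 2)
                errx(1, "usage: nicsel nic-under-test [other-nic ...]");

        if (devctl_enable(argv[1]) != 0)
                warn("enable %s", argv[1]);

        for (i = 2; i < argc; i++)
                if (devctl_disable(argv[i], false) != 0)
                        warn("disable %s", argv[i]);

        return (0);
}

Build with "cc -o nicsel nicsel.c -ldevctl". Error handling is
deliberately minimal; a real harness would also persist the choice via
device.hints so it survives reboot.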

Regards,
Kevin
Thomas Mueller
2021-04-23 08:21:22 UTC
from Kevin Bowling:

> [... snip ...]

> re(4) is in a pretty abhorrent state. All of these chips require
> runtime patching of the PHY (which I believe is a DSP algorithm that
> gets improved over time) and MCU code. That is totally absent in
> FreeBSD. A vendor driver exists in net/realtek-re-kmod which contains
> the fixups and works alright for many users. This driver cannot be
> imported into FreeBSD as is: it makes strange use of the C
> preprocessor, which needlessly blows up compile time and driver size.
> The out of tree driver has a different set of supported adapters, so
> some kind of meld is necessary. Realtek does not provide public chip
> documentation; I am trying to see if they will grant NDA access to
> contributors.

Some re(4) chips work in FreeBSD, some don't. I gave up on FreeBSD 12.x because of re(4) deficiencies.

Sad to have to seek NDA access for an open-source project like FreeBSD.

NetBSD seems to work better.

OpenBSD's GPT support is in such a condition as to render it incompatible with my system.

Haiku, maybe?

My computer with an on-motherboard AR9271 wireless chip, dating to 2013, is still waiting for FreeBSD support.

> [... snip ...]


Tom
Gary Palmer
2021-04-23 11:27:49 UTC
On Thu, Apr 22, 2021 at 10:22:30PM -0700, Kevin Bowling wrote:
> Aquantia has an out of tree driver in net/aquantia-atlantic-kmod. The
> code is not currently in a place where I'd like to see it in the tree.
> I am not really sure how common these are; the company was acquired by
> Marvell, which is still producing them as a client networking option
> while it has other IP for higher end/speed.

Aquantia seems to be used in more and more motherboards to provide
>1Gbps network interfaces (2.5Gbps or 10Gbps), particularly
consumer-oriented motherboards.

Regards,

Gary
Kyle Evans
2021-04-23 12:46:57 UTC
On Fri, Apr 23, 2021 at 12:22 AM Kevin Bowling <***@kev009.com> wrote:
>
> Greetings,
>
> [... snip ...]
>
> Tehuti Networks seems to have gone out of business. Probably not
> worth worrying about.
>

That's unfortunate. I had a box of their 10G NICs and I got them to
put a driver up for review[0][1], but they weren't very responsive and
the existing codebase was in pretty rough shape.

Beyond that, your #3 seems to be the most appealing. #2 could probably
work in the mid-to-long term, but we'd likely be better off
bootstrapping interest with solid community-supported drivers, then
reaching out to vendors once we can demonstrate that the
field-of-dreams plan can work and drive some substantial amount of business.

Thanks,

Kyle Evans

[0] https://reviews.freebsd.org/D18856
[1] https://reviews.freebsd.org/D19433
Rick Macklem
2021-04-23 13:19:23 UTC
Kyle Evans wrote:
>On Fri, Apr 23, 2021 at 12:22 AM Kevin Bowling <***@kev009.com> wrote:
>>
>> Greetings,
>>
>> [... snip ...]
>>
>> Tehuti Networks seems to have gone out of business. Probably not
>> worth worrying about.
>>
>
>That's unfortunate. I had a box of their 10G NICs and I got them to
>put a driver up for review[0][1], but they weren't very responsive and
>the existing codebase was in pretty rough shape.
>
>Beyond that, your #3 seems to be the most appealing. #2 could probably
>work in the mid-to-long term, but we'd likely be better off
>bootstrapping interest with solid community-supported drivers, then
>reaching out to vendors once we can demonstrate that the
>field-of-dreams plan can work and drive some substantial amount of business.

I'll admit to knowing nothing about it, but is using the LinuxKPI
to port Linux drivers into FreeBSD feasible?

Obviously, given the size of the Linux community, it seems
more likely that it will have a driver that handles many chip
variants, plus updates for newer chips, I think.

I do agree that having drivers that at least work for the
basics (maybe no Netmap, TSO, or similar) for the
commodity chips would make it easier for new adopters
of FreeBSD. (I avoid the problem by finding old, used
hardware. The variants of Intel PRO/1000 and re chips I
have work fine with the drivers in FreeBSD 13/14. ;-)

Oh, and if TSO support is questionable, I think it would be
better to leave it disabled and at least generate a warning
when someone enables it, if it can be enabled at all.

Good luck with it, rick

Kevin Bowling
2021-04-23 22:12:06 UTC
On Fri, Apr 23, 2021 at 6:19 AM Rick Macklem <***@uoguelph.ca> wrote:
>
> Kyle Evans wrote:
> >On Fri, Apr 23, 2021 at 12:22 AM Kevin Bowling <***@kev009.com> wrote:
> >>
> >> Greetings,
> >>
> >> [... snip ...]
> >>
> >> Tehuti Networks seems to have gone out of business. Probably not
> >> worth worrying about.
> >>
> >
> >That's unfortunate. I had a box of their 10G NICs and I got them to
> >put a driver up for review[0][1], but they weren't very responsive and
> >the existing codebase was in pretty rough shape.
> >
> >Beyond that, your #3 seems to be the most appealing. #2 could probably
> >work in the mid-to-long term, but we'd likely be better off
> >bootstrapping interest with solid community-supported drivers, then
> >reaching out to vendors once we can demonstrate that the
> >field-of-dreams plan can work and drive some substantial amount of business.
>
> I'll admit to knowing nothing about it, but is using the LinuxKPI
> to port Linux drivers into FreeBSD feasible?

Hi Rick,

I did consider this but do not think it makes sense for PCI Ethernet
NIC drivers. I will explain my judgement for consideration. In
complex systems such as an Ethernet driver, there is intrinsic and
extrinsic complexity. The intrinsic properties of an Ethernet driver
are small enough that one person can understand them. So we spend a
lot of time fighting against extrinsic problems that I outlined in my
email. Put in simpler terms, an iflib driver can be written by one
person, and there are a number of people in the community who are good
at this. The intrinsic complexity of the LKPI on top of an Ethernet
driver, as well as some license and social problems people have with
the LKPI, makes it a worse fit.
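
To illustrate the scale: the heart of an iflib driver is essentially a
table of IFDI methods plus the descriptor encap/refill hooks in its
if_txrx ops, with iflib owning the ifnet glue, queue setup, and busdma
work. A trimmed sketch of the shape, using a hypothetical "mynic"
driver (not buildable on its own, and the real method list is longer):

static device_method_t mynic_if_methods[] = {
        DEVMETHOD(ifdi_attach_pre,      mynic_if_attach_pre),
        DEVMETHOD(ifdi_attach_post,     mynic_if_attach_post),
        DEVMETHOD(ifdi_detach,          mynic_if_detach),
        DEVMETHOD(ifdi_init,            mynic_if_init),
        DEVMETHOD(ifdi_stop,            mynic_if_stop),
        DEVMETHOD(ifdi_tx_queues_alloc, mynic_if_tx_queues_alloc),
        DEVMETHOD(ifdi_rx_queues_alloc, mynic_if_rx_queues_alloc),
        DEVMETHOD(ifdi_queues_free,     mynic_if_queues_free),
        DEVMETHOD(ifdi_media_status,    mynic_if_media_status),
        DEVMETHOD(ifdi_media_change,    mynic_if_media_change),
        DEVMETHOD(ifdi_intr_enable,     mynic_if_intr_enable),
        DEVMETHOD(ifdi_intr_disable,    mynic_if_intr_disable),
        DEVMETHOD_END
};

Each method is a focused, testable function; that is what keeps the
whole thing within one person's head.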

If you apply this reasoning to drm+i915 and the like, it becomes clear
why the LinuxKPI is the right approach there. The intrinsic properties
of the graphics stack are beyond the time and practicality available to
most in the community: the graphics drivers have become labyrinths that
most kernel devs don't have internal knowledge of, they rival the size
of the rest of the kernel, and keeping up is easier if internal code
changes can be kept to a minimum.

> Obviously, given the size of the Linux community, it seems
> more likely that it will have a driver that handles many chip
> variants, plus updates for newer chips, I think.

I would agree that Linux has a much better Realtek driver. I am
familiar with the Linux e1000 series, for instance, and although they
tend to have most of the workarounds, the quality is a lot lower than
most users realize.

> I do agree that having drivers that at least work for the
> basics (maybe no Netmap, TSO, or similar) for the
> commodity chips would make it easier for new adopters
> of FreeBSD. (I avoid the problem by finding old, used
> hardware. The variants of Intel PRO/1000 and re chips I
> have work fine with the drivers in FreeBSD 13/14. ;-)

Having good inbox network drivers is a way for FreeBSD to
differentiate itself. I like nice drivers like cxgbe(4); it is a
great piece of engineering and, to me, even artful. Consider some
cxgbe so you can test high speeds :)

> Oh, and if TSO support is questionable, I think it would be
> better to leave it disabled and at least generate a warning
> when someone enables it, if it can be enabled at all.

I would like to preserve and correct TSO and other offloads as much as
possible. There are consequences to half-assing it, such as burning
more electricity than necessary and causing unnecessary HW
upgrade/replacement. Of course, where we can't deliver, we should
limit the feature set to known-good ones. Striking this balance will
require more feedback from the community, with faster turnaround time
on PRs.
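
The kernel-side warning you suggest would live in each driver's
SIOCSIFCAP handling. In the meantime, a lab harness can pin an
interface to a known-good set from userland with the same
SIOCGIFCAP/SIOCSIFCAP ioctls that "ifconfig em0 -tso" drives. A
minimal sketch, just the standard ioctl dance:

/* notso: clear the TSO capability bits on an interface. */
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <sys/sockio.h>
#include <net/if.h>

#include <err.h>
#include <string.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
        struct ifreq ifr;
        int s;

        if (argc != 2)
                errx(1, "usage: notso <ifname>");

        s = socket(AF_LOCAL, SOCK_DGRAM, 0);
        if (s < 0)
                err(1, "socket");

        memset(&ifr, 0, sizeof(ifr));
        strlcpy(ifr.ifr_name, argv[1], sizeof(ifr.ifr_name));

        /* Read the enabled capability set... */
        if (ioctl(s, SIOCGIFCAP, &ifr) < 0)
                err(1, "SIOCGIFCAP");

        /* ...and request the same set with the TSO bits cleared. */
        ifr.ifr_reqcap = ifr.ifr_curcap & ~(IFCAP_TSO4 | IFCAP_TSO6);
        if (ioctl(s, SIOCSIFCAP, &ifr) < 0)
                err(1, "SIOCSIFCAP");

        close(s);
        return (0);
}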

George Neville-Neil
2021-05-06 17:18:49 UTC
On 23 Apr 2021, at 18:12, Kevin Bowling wrote:

> [... snip ...]

On the NIC Lab question, anyone on the project (and some off it) can
use the lab we built for high-performance networking at Sentex. This
lab has plenty of machines and excellent remote hands:

https://wiki.freebsd.org/TestClusterOnePointers

https://wiki.freebsd.org/TestClusterOneReservations

Folks are welcome to contact me off list for access.

Best,
George
Chris
2021-04-24 00:37:09 UTC
On 2021-04-22 22:22, Kevin Bowling wrote:
> [... snip ...]
Thank you for your efforts here, Kevin.
I had intended to shoot a similar note regarding the Realtek NICs earlier
this week, but got sidelined.
I see a great deal of "bad press" on the internet regarding Realtek NICs,
largely *BSD related, but also from many others as well. We ($work) have
had a pretty good experience with them. So much so that we use them
almost exclusively (on FreeBSD). We've found them to be real performers
for the dollar. So I was going to send out a message that, given our stock
of re(4) hardware, we could make ourselves available for testing against
any work being done on re(4).
IOW, if we can help in any way, we'll make ourselves available.

Thanks again.

--Chris