how to change network TTL limit opensuse 13.2 64bit

Apparently systemd no longer reads /etc/sysctl.conf, which is where TTL used to be changed.

Hi,

On 04.04.2016 03:56, gariac wrote:
> Apparently systemd no longer reads /etc/sysctl.conf, which is where TTL
> used to be changed.

And why is that your interpretation, exactly?

Andreas

Well mostly because I edited sysctl.conf, added this line

net.ipv4.ip_default_ttl = 65

booted, and nothing changed.

I spent some time searching the internet regarding this, and I found a tip related to having to run a certain service (I don’t have the link handy) to cause the sysctl.conf file to be read, only to find, when checking services in YaST, that the particular service doesn’t exist.

I assume the switch to systemd made a lot of tips on the interwebs null and void. I tried limiting my search to the last two years and found nothing. I don’t show up at a forum without at least trying to find an answer myself first.

The question posted by AndreasStieger above does not only mean that you tell us what you changed (you only post what you made it, but not what it was before; a cat /path/to/sysctl.conf before and/or after would have been very good), but it also means that you have to show us how you came to the conclusion, as worded in this post, that “nothing changed”.

You do not post any command and its output that shows what “it” was before, nor what “it” was after.
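For example, output like the following (just a sketch of the sort of evidence that helps; these are the standard locations, your system may differ) would show us exactly what “it” is:

> sysctl net.ipv4.ip_default_ttl          # the value currently in effect
> cat /proc/sys/net/ipv4/ip_default_ttl   # the same value, read directly from /proc
> cat /etc/sysctl.conf                    # what you actually put in the file

Run these before the change, after the change and after the reboot, and post the output.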

You seem to think that people here do what you are trying to do all the time, that they daily use the same commands as you do, and that they thus automatically understand all the steps you skip in your explanation. I can assure you that that is not the case. People here depend completely on what you tell and show. They cannot look over your shoulder.

Please try to help people understanding you, only then someone might be able to help you.

I don’t really understand the question, but I found that, at least on Arch Linux, that file has been deprecated.

I guess your solution might be to use:

/etc/sysctl.d/99-sysctl.conf
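Something along these lines might do it (an untested sketch; the “99-ttl.conf” name is just an example, any *.conf file in /etc/sysctl.d/ should be picked up):

> echo "net.ipv4.ip_default_ttl = 65" | sudo tee /etc/sysctl.d/99-ttl.conf
> sudo sysctl --system    # re-reads /etc/sysctl.d/*.conf as well as /etc/sysctl.conf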

The OP needs to define what he means by “TTL.”

If you’re talking about the TTL of TCP/IP packets, it’ll likely be overridden by standards and is not likely modifiable.
It used to be (and still may be) important for IPv4; AFAIK it is irrelevant for IPv6 (time-out is determined by number of hops, not time).

There are other TTL, but the OP needs to clarify.

TSU

A quick “apropos” sysctl on this Leap 42.1 system reveals:


> apropos sysctl
_sysctl (2)          - read/write system parameters
ifsysctl (5)         - (unknown subject)
proc_dostring (9)    - read a string sysctl
sysctl (2)           - read/write system parameters
sysctl (8)           - configure kernel parameters at runtime
sysctl.conf (5)      - sysctl preload/configuration file
sysctl.d (5)         - Configure kernel parameters at boot
systemd-sysctl (8)   - Configure kernel parameters at boot
systemd-sysctl.service (8) - Configure kernel parameters at boot
>

Meaning sysctl is still around with systemd, and it’s an early-boot service:


UNIT FILE                                  STATE
sysctl.service                             static
systemd-sysctl.service                     static

> systemctl status sysctl.service
systemd-sysctl.service - Apply Kernel Variables
   Loaded: loaded (/usr/lib/systemd/system/systemd-sysctl.service; static)
   Active: active (exited) since Mo 2016-04-04 12:29:33 CEST; 3h 49min ago
     Docs: man:systemd-sysctl.service(8)
           man:sysctl.d(5)
  Process: 412 ExecStart=/usr/lib/systemd/systemd-sysctl (code=exited, status=0/SUCCESS)
 Main PID: 412 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/systemd-sysctl.service

> systemctl status systemd-sysctl.service
systemd-sysctl.service - Apply Kernel Variables
   Loaded: loaded (/usr/lib/systemd/system/systemd-sysctl.service; static)
   Active: active (exited) since Mo 2016-04-04 12:29:33 CEST; 3h 49min ago
     Docs: man:systemd-sysctl.service(8)
           man:sysctl.d(5)
  Process: 412 ExecStart=/usr/lib/systemd/systemd-sysctl (code=exited, status=0/SUCCESS)
 Main PID: 412 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/systemd-sysctl.service

>
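So, to re-apply whatever is configured in sysctl.conf / sysctl.d without rebooting, something like this should be enough (a sketch, not verified on 13.2):

> sudo systemctl restart systemd-sysctl.service
> sysctl net.ipv4.ip_default_ttl    # check that the value really changed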

tsu2 is not incorrect: for any given IP stream the TTL does get changed by the dynamic part of the protocol, but the kernel has default initial TTL values which may or may not be correct for some specific networks.
So, now to the Kernel documentation:
https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt


ip_default_ttl - INTEGER
    Default value of TTL field (Time To Live) for outgoing (but not
    forwarded) IP packets. Should be between 1 and 255 inclusive.
    Default: 64 (as recommended by RFC1700)
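So, for a quick runtime test of that variable (a sketch; the change is lost at the next boot unless it is also written to a configuration file):

> sudo sysctl -w net.ipv4.ip_default_ttl=65
> cat /proc/sys/net/ipv4/ip_default_ttl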

Is this enough help? Or, is more detail needed?

Not sure where I may be incorrect?
I think I said that the setting you mention <does> apply to IPv4 but may not apply to IPv6.

Although many (if not most) settings in /proc/sys/net/ipv4/* also apply to IPv6, it’s not clear exactly what does and does not apply (AFAIK no one has actually listed them).

To the OP,
I don’t know that testing for a time-out is that easy.
The following page in an article I wrote many years ago (and it still applies to all current versions of openSUSE) describes how to persist modified TCP/IP buffer and TCP/IP Congestion Control Algorithm settings in /etc/sysctl.conf, which are easy to test. (A simple test, as I described in my article, is to run a torrent app for a while with thousands of simultaneous connections before and then after modifying. The intent is to overload your network sockets; the actual numbers will vary depending on your individual machine.)

TSU

To the OP
An addendum to my prior post,
You may want to read the entire article I referenced on tuning TCP/IP.

I don’t know what you’re chasing when you’re trying to modify some kind of TTL,
But, IMO the “better” solution is to change the TCP/IP Congestion Control Algorithm.

As I described in my article, the default enabled algorithm is very conservative, likely meant to ensure performance even on minimal-resource hardware (remember, a default install is the same on every machine, from antiquated or tiny boxes to enormous servers). The average laptop, desktop or server today is far more powerful, and you may want to allocate a larger portion of your system’s resources to networking; your network load may be heavier, or you may be on something other than wired fast Ethernet (100 Mbit/s).

Anyone who is running WiFi, operating as a server (like torrents, or a medium-heavy server with multiple tens to hundreds of simultaneous network connections), or using Gigabit or a variety of other types of network connections, should read my article explaining the TCP/IP Congestion Control Algorithm.

Each algorithm is a packaged approach to (usually, but not always) dynamically modifying various TCP/IP parameters, including packet TTL, sliding window sizes (also explained), and more.
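For the curious, those knobs can also be inspected and switched with sysctl (a sketch; which algorithms are actually available depends on the kernel modules installed):

> sysctl net.ipv4.tcp_congestion_control              # algorithm currently in use
> sysctl net.ipv4.tcp_available_congestion_control    # algorithms you can switch to
> sudo sysctl -w net.ipv4.tcp_congestion_control=cubic    # substitute one of the available names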

TSU

@tsu2: Double negative: You’re NOT incorrect! :wink:

Ah!
:slight_smile:

The double negative!

TSU

There have been threads on the interwebs going around for about two years about how cellular companies detect tethering. I use about 1.5 GB of a 5 GB limit per month, so as a user I really don’t care. But as a geek, I thought it would be interesting to bump up my TTL limit by one to compensate for the extra device in the path, so that my tethered notebook would have the same TTL as the phone.

I thought this would be simple, but alas not.

Now, I checked the TTL to the phone network by doing a ping; I first did a traceroute to find the private-network IP so I knew what to ping. Changing the TTL parameter didn’t change the TTL value detected by pinging the network. (I assumed TTL only has one meaning, else I would have spelled out time to live.)
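For reference, roughly what I did (from memory; the addresses here are made up, not my real ones):

> traceroute -n www.example.com     # to find the private-network hop on the phone side
> ping -c 3 192.168.42.129          # the ttl= field in each reply is what I compared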

I will try some of the suggestions, but this isn’t exactly critical. However, often something you spend an hour trying to debug is trivial for someone who knows the secret sauce.

I should have mentioned I’m only using IPv4.

When the (cellular) User Equipment (UE) is used as a router, then the TTL behaviour of any given (routed) IP stream will be heavily influenced by the dynamics of the air (radio) interface: the number of retries needed to get any given IP packet over the air interface will affect the (remaining) TTL value of that IP data stream through the rest of the terrestrial/radio network.
US American colleagues loved to visit us here in Germany because they were able to experience live the behaviour of UEs moving at more than 200 km/h . . . :wink:
The only effect of increasing the initial TTL value of an IP segment used to tether a device to a UE will be to decrease the number of ICMP datagrams related to uplink data streams.

Some 3GPP definitions:

  • Uplink: An “uplink” is a unidirectional radio link for the transmission of signals from a UE to a base station, from a Mobile Station to a mobile base station or from a mobile base station to a base station.

  • Downlink: Unidirectional radio link for the transmission of signals from a UTRAN access point to a UE. Also in general the direction from Network to UE.

Agreed, telecommunications is not simple. 3GPP documents related to the “heavy end” of the air and terrestrial interface specifications have content of more than 1000 pages. Moving away from the mobile telephony scene, there’s an awful lot of ITU-T documents in the same league. And all that just because the people involved in that industry wish to continue offering the people of planet earth the ability to simply enter a code at a UE which, when activated, will provide a connection to another UE, regardless of where the two parties are located on the planet. Putting it another way, the global telecommunications network is the largest robot built to date by human beings . . .

Yes, yes: some people will argue that it’s not a robot; they would prefer to use the term “automatic machine”. I would argue that they should intensively read the 3GPP and ITU-T documents related to operator procedures: the telecommunications maintenance staff merely “influence” the network; the network controls itself; 24 x 7 . . .

Almost: any IP data stream begins with a TTL (or hop limit) value. As that data stream is passed from router to router, each router decreases the TTL value of that given stream. When any router determines that the data stream’s TTL value has been reduced to 0 (zero), it discards the IP packet with the TTL value of 0 and sends an ICMP error datagram in the reverse direction to the IP address which had originated the data stream.
The originating IP address can then apply error correction routines to handle the transmission loss; it can, for example, resend the “dead” IP packet; or, it can inform the human being that the connection to the target IP address is no longer possible . . .
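Incidentally, this is exactly the mechanism that traceroute exploits (a sketch; the hops shown obviously depend on your own network):

> traceroute -n 8.8.8.8

It sends probes with increasing TTL values (1, 2, 3, . . .) and each “TTL exceeded” ICMP datagram that comes back identifies one router along the path.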

Yes, Public Land Mobile Network (PLMN) operators do have different policies with respect to tethering; some allow or tolerate it, others attempted to prohibit tethering. The current state of the “game” is that some operators are attempting to limit tethering by means of special customer tariffs. The means by which a PLMN detects tethering is Network Equipment (NE) supplier specific. I haven’t yet searched the 3GPP documents but, there is a good chance that only specific NE suppliers offer “tethering detection”, almost certainly by means of a patented algorithm . . .

Doesn’t matter: IPv6 also has this value but, it’s been renamed to “hop limit” . . .

Additional information: RFC 7278 “Extending an IPv6 /64 Prefix from a Third Generation Partnership Project (3GPP) Mobile Interface to a LAN Link” states, in “3. Requirements for Extending the 3GPP Interface /64 IPv6 Prefix to a LAN Link”:

  R-4: The UE MUST decrement the time to live (TTL) when passing packets between IPv6 links across the UE.

The only question is: “Which TTL value does the UE use when IT begins an IP data stream?”
The RFC is clear: a properly behaving UE will decrement the TTL value of the uplink IP packets it receives from a tethered device.

  • In other words, with respect to a tethered device the UE shall behave the same as any other IP router.

But, if the UE itself initiates an uplink IP data stream, which TTL value is IT using? I’m unsure, but I have a feeling that the PLMN normally controls the TTL value that the UEs shall use – it’s just that I can’t find the reference in the 3GPP documents . . . Possibly Alzheimer’s . . .
But, the bottom line is that the UE most probably uses TTL values which are different to the default value defined in the Linux Kernel documentation . . .

Even more information: RFC 2132 “DHCP Options and BOOTP Vendor Extensions” states, in “4.5. Default IP Time-to-live”:

  This option specifies the default time-to-live that the client should use on outgoing datagrams. The TTL is specified as an octet with a value between 1 and 255.

So, when a UE connects to (registers with) a PLMN, one result of the procedures executed is that the specific UE is assigned an IP address by the PLMN via DHCP and, as part of the DHCP procedure, the PLMN assigns that UE a default TTL value, which is not necessarily the value specified by the Internet Assigned Numbers Authority (IANA):
<https://www.iana.org/assignments/ip-parameters/ip-parameters.xml>
Now to the UE’s tethered device:

  1. The UE may assign by means of DHCP an IP Address and a TTL value to the tethered device.
  2. If the tethered device is a Linux system, then the TTL value assigned by means of DHCP will override the TTL value assigned by the Linux Kernel via sysctl.

@gariac: If you wish to override the TTL value assigned to the UE’s tethered device by the last DHCP procedure which had occurred, then you’ll have to use sysctl to assign the TTL value you wish to use before beginning any IP data streams over the newly established tethered connection.
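A minimal sketch of that step, assuming the tethered connection is already up (the value 65 is just the example used earlier in this thread):

> sudo sysctl -w net.ipv4.ip_default_ttl=65    # run after the tethered link is established
> cat /proc/sys/net/ipv4/ip_default_ttl        # confirm it stuck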
Please note that RFC 1700 has been obsoleted (RFC 3232 “Assigned Numbers: RFC 1700 is Replaced by an On-line Database”).

IMO regarding various sub-topics in this thread…

The IPv6 “32 hop limit” strictly speaking isn’t the traditional TTL, which was originally defined as an actual time value (seconds)… It’s a different method to expire a bad route… This difference between IPv4 and IPv6 isn’t trivial; I’ve had IPv6 packets take over 5 minutes to expire, which can mean a really frustrated user who waits that long before finding out that his networking to that target is broken.

Regarding tethering,
The user should know that if he is using an Android device, the Android is almost certainly being set up as a proxy device, which means that Android is negotiating its networking connection upstream to the ISP completely independently of any settings on your tethered laptop. See my “Tuning TCP/IP Networking” article, where I describe this as one of the limitations you need to accept: you only have control over one, or at most the two, end points and have no control over what happens in between. You can investigate Android tuning, but from what I’ve seen (as a developer) no one but Google has access to those settings. And, since the physical connection between the Android and the tethered device is typically a fairly high-quality connection (wired or wireless, over a very short distance), no tuning is generally necessary.

As I described earlier,
The best method, with the best chance of improving poor throughput, is to change your TCP/IP Congestion Control Algorithm, followed by enlarging your TCP/IP buffers <only> if you’re operating consistently at the maximum limits of your connection and transferring very large files (there are other scenarios, but this is the most common for ordinary Users).
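For reference, the buffer knobs in question (the value set in the second line is purely illustrative, not a recommendation):

> sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem             # receive/send buffers: min, default, max (bytes)
> sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 6291456"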

TSU

Apropos IPv6: ‘sysctl’ also handles the /proc/sys/net/ipv6/* variables, including “hop_limit”: <https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt>


hop_limit - INTEGER
    Default Hop Limit to set.
    Default: 64
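The per-interface settings can be read and changed the same way as the IPv4 one (a sketch; there are also “all” and per-device entries alongside “default”):

> sysctl net.ipv6.conf.default.hop_limit
> sudo sysctl -w net.ipv6.conf.default.hop_limit=64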

Also, note that IPv4 and IPv6 are fundamentally different in their loop detection behaviours:

  • Each IPv4 Router should decrement the TTL value;
  • Each IPv6 Router should increment the hop-count value.

Incorrectly implemented IPv6 Routers can very quickly provoke long (at least 5 minutes) “connection dead” time-outs.

Looks like the folks managing IPv6 are gradually rolling out their own settings; most of the settings I see today did not exist a year ago.

Re: IPv6 hop limit
I’m surprised, and not really happy, to see that the new default value is 64, doubling the previous 32, which I thought was plenty long enough (is there really any part of the earth which should require more than a dozen, at most 20, hops to reach, unless the ISPs can’t agree on “net neutrality” principles?).

TSU

Maybe they are planning for intergalactic IPv6 communications in advance :slight_smile: