firewalld 1.0 no more direct rules: how to add nat masquerade?

Hello,

I have a Linux box which I use as a router for other PCs on different VLANs. In firewalld 0.9.3 I needed to add the following direct rules:

ipv="ipv4" table="nat" chain="POSTROUTING" priority="0" -o ppp0 -j MASQUERADE
passthrough ipv="ipv4" -I FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
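For reference, these two direct rules would have been installed with commands roughly like the following under 0.9.x (a sketch using the standard firewall-cmd direct-interface options):

```shell
# Old-style direct rules (firewalld <= 0.9.x; the direct interface is
# deprecated in 1.0). Masquerade everything leaving ppp0:
firewall-cmd --permanent --direct --add-rule ipv4 nat POSTROUTING 0 \
    -o ppp0 -j MASQUERADE
# Clamp the TCP MSS of forwarded SYN packets to the path MTU:
firewall-cmd --permanent --direct --add-passthrough ipv4 \
    -I FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
firewall-cmd --reload
```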

In firewalld I have 2 zones: external with interface ppp0 and home with all other interfaces (eno1, vlan1, vlan2, vlan3 and vlan4).

The last snapshot, 20210817, installed firewalld 1.0.0, and after a reboot I lost the internet connection for all PCs on the different VLANs, but internet still works for my Linux box itself.
I see in the firewalld change description that “* NAT rules moved to inet family (reduced rule set)” and “* Direct interface is deprecated”.

On https://firewalld.org/blog/ they describe the configuration for tcp-mss-clamp using the new policies:

# firewall-cmd --permanent --new-policy pppTcpClamp
# firewall-cmd --permanent --policy pppTcpClamp --add-ingress-zone internal
# firewall-cmd --permanent --policy pppTcpClamp --add-egress-zone external 
# firewall-cmd --permanent --policy pppTcpClamp --add-rich-rule='rule tcp-mss-clamp'

but nothing about NAT and masquerade.
I applied the firewall-cmd commands about policies for tcp-mss-clamp, but when executing the reload I got a very long error message, and the systemctl status is

# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
     Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: disabled)
     Active: active (running) since Fri 2021-08-20 23:04:07 CEST; 2s ago
       Docs: man:firewalld(1)
   Main PID: 29057 (firewalld)
      Tasks: 2 (limit: 4915)
        CPU: 1.363s
     CGroup: /system.slice/firewalld.service
             └─29057 /usr/bin/python3 /usr/sbin/firewalld --nofork --nopid

Aug 20 23:04:07 hpprol2 systemd[1]: Starting firewalld - dynamic firewall daemon...
Aug 20 23:04:07 hpprol2 systemd[1]: Started firewalld - dynamic firewall daemon.
Aug 20 23:04:08 hpprol2 firewalld[29057]: ERROR: 'python-nftables' failed: internal:0:0-0: Error: Could not parse integer
                           
                                          JSON blob:
                                          {"nftables": {"metainfo":  {"json_schema_version": 1}}, {"add": {"chain": {"family": "inet",  "table": "firewalld", "name": "mangle_PRE>
Aug 20 23:04:08 hpprol2 firewalld[29057]: ERROR: COMMAND_FAILED:  'python-nftables' failed: internal:0:0-0: Error: Could not parse integer
                                     
                                          JSON blob:
                                          {"nftables": {"metainfo":  {"json_schema_version": 1}}, {"add": {"chain": {"family": "inet",  "table": "firewalld", "name": "mangle_PRE>

The lines under “JSON blob” are very long (more than 78 KB).

In /etc/firewalld/policies I found the file pppTcpClamp.xml:

<policy target="CONTINUE">
<rule>
<tcp-mss-clamp value="None"/>
</rule>
<ingress-zone name="home"/>
<egress-zone name="external"/>
</policy>

I removed the XML file and firewalld restarts without error. Strange that the firewalld blog gives a bad command >:(
How can I solve the parse error?

In the firewall-cmd man page I see this remark: “The direct interface has been deprecated. It will be removed in a future release. It is superseded by policies, see firewalld.policies(5).” But there is no description of nat/MASQUERADE in firewalld.policies.

I tried adding a rich rule for ipv4 with element “masquerade” in zone “external”: this rule is accepted but does not restore the internet connection for the PCs on the VLANs.
So how can I add the nat/MASQUERADE rule in firewalld for ppp0?

Below is a description of my network:


 ┌────────────────────────┐                      ┌───────┐
 │ Tumbleweed Server with │ eno3 (no IP)         │ CABLE │
 │ DHCP + DNS + firewalld ├───────────ppp0───────│ Modem ├─── Internet
 │                        │                      │       │
 │ do intervlan routing   │                      └───────┘
 └───┬────────────────┬───┘                   
 eno2 (No IP)     eno1 (192.168.1.120)                
     │                │enslaved in br0 (for VM)       
     │                │                                    
trunk│ port          VLAN 1                           
 ┌───┴────────────────┴──────────────────────────────────┐
 │         TL─SG3216          Switch Level 2             │
 │                                                       │
 │                             VLAN  ID                  │
 │   4                   2                  3            │
 │(192.168.4.0/24)     (192.168.2.0/24) (192.168.3.0/24) │
 └───┬──────────────────┬──────────────────┬─────────────┘
     │                  │                  │         
     │                  │                  │         
   Samba               PCs                 PCs                  
192.168.4.91      192.168.2.100─     192.168.3.100─        
 raspberry        192.168.2.199      192.168.3.199
Printer 192.168.4.50

Many thanks in advance
Philippe

This doesn’t look right…

<tcp-mss-clamp value="None"/>

When a value is not provided then the maximum segment size is set to path MTU. If you set the value explicitly, does it then get parsed as expected?

What does this feature look like?

This feature adds an “enable TCP MSS clamp” option to firewalld rich rules. The user has an option called tcp-mss-clamp in rich rules. The tcp-mss-clamp option takes an optional operand called value, which allows the user to set the maximum segment size. The maximum segment size can be set to pmtu (path maximum transmission unit) or a value greater than or equal to 536. If the user sets value to pmtu, the maximum segment size is set to the smallest MTU (maximum transmission unit) of all the nodes between the source and the destination. This is a useful default because the user doesn’t have to manually set the MSS to the smallest MTU in the network path. By setting MSS to pmtu, all packets will be small enough to traverse the network path without being dropped or fragmented.

Thanks,
I recreated the policy using this command for the tcp-mss-clamp:

hpprol2:~ # firewall-cmd --permanent --policy pppTcpClamp --add-rich-rule='rule tcp-mss-clamp value="pmtu"'
success

Thereafter firewalld starts without error. :slight_smile:

This solves one problem, but there is still no internet connection for the PCs on the VLANs; I’m still missing the nat/MASQUERADE for ipv4. >:(

I forced the installation of firewalld-0.9.3 and python-firewalld-0.9.3, so I’m back on the old version where the PCs on the VLANs have an internet connection.

Many thanks for your help
Philippe

Maybe this information will be of help?

For example…

firewall-cmd --policy mypolicy --add-masquerade

Masquerade has been supported natively by firewalld for as long as I can remember, even before policies were introduced.

firewall-cmd --zone=outgoing-zone --add-masquerade
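Spelled out for the zone names used in this thread, the two approaches would look roughly like this (the policy name homeToExternal is illustrative):

```shell
# Option 1: zone-based masquerade on the internet-facing zone:
firewall-cmd --permanent --zone=external --add-masquerade

# Option 2: policy-based masquerade from "home" to "external"
# (the policy name is illustrative):
firewall-cmd --permanent --new-policy homeToExternal
firewall-cmd --permanent --policy homeToExternal --add-ingress-zone home
firewall-cmd --permanent --policy homeToExternal --add-egress-zone external
firewall-cmd --permanent --policy homeToExternal --add-masquerade
firewall-cmd --reload
```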

Hello Arvidjaar,

I’m back on firewalld 0.9.3 and can only test your command this afternoon, but in the GUI config tool for firewalld there is a button “Masquerade” for the related zone. I have enabled this and it was not enough to get an internet connection. See

https://paste.opensuse.org/images/93821406.png

I’m thinking that your command does the same? Or maybe this is not working in the GUI?
I’ll test this as soon as possible.

Many thanks
Philippe

Yes, it is the same. It applies masquerading to the “external” zone. Whether this is correct we have no idea - you never showed your actual firewalld configuration. I have seen enough posts about firewalld where users tried to configure some zone (based on the zone name) while the actual zone associated with the interface was different. So while the configuration was correct, it was never applied.

I applied it to the “external” zone where the ppp0 connection is set. All other connections are in the “home” zone.
I had already applied this in the 0.9.3 version, but needed to add a direct rule so that the VLAN PCs have an internet connection:
https://paste.opensuse.org/images/28575649.png

Regards
Philippe

Assuming you are using the iptables backend, show

iptables -L -n -v
iptables -L -n -v -t nat
firewall-cmd --list-all-zones
firewall-cmd --list-all-policies
firewall-cmd --direct --get-all-rules

once without and once with your direct rule. Better after a clean restart each time.

Hello,

I use the nftables backend.

# firewall-cmd --get-active-zones
docker
  interfaces: docker0
external
  interfaces: eno3 ppp0
home
  interfaces: vlan1 eno2 vlan2 vlan3 eno1 br0 vlan4

# firewall-cmd --list-all-zones
block
  target: %%REJECT%%
  icmp-block-inversion: no
  interfaces: 
  sources: 
  services: 
  ports: 
  protocols: 
  forward: no
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

dmz
  target: default
  icmp-block-inversion: no
  interfaces: 
  sources: 
  services: ssh
  ports: 
  protocols: 
  forward: no
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

docker (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: docker0
  sources: 
  services: 
  ports: 
  protocols: 
  forward: no
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

drop
  target: DROP
  icmp-block-inversion: no
  interfaces: 
  sources: 
  services: 
  ports: 
  protocols: 
  forward: no
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

external (active)
  target: default
  icmp-block-inversion: no
  interfaces: eno3 ppp0
  sources: 
  services: ssh
  ports: 
  protocols: 
  forward: no
  masquerade: yes
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

home (active)
  target: default
  icmp-block-inversion: no
  interfaces: br0 eno1 eno2 vlan1 vlan2 vlan3 vlan4
  sources: 
  services: apache2 apache2-ssl dhcp dhcpv6 dhcpv6-client dns dns-over-tls finger ftp http https imap imaps ipp ipp-client irc kdeconnect kerberos kpasswd ldap ldaps libvirt libvirt-tls libvirtd-relocation-server mdns minidlna mountd mysql nfs nfs3 ntp openvpn rpc-bind rsyncd samba samba-client samba-dc sane slp smtp smtps snmp ssh tftp tigervnc tigervnc-https transmission-client vnc-server
  ports: 67/udp 68/udp
  protocols: 
  forward: yes
  masquerade: yes
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
        rule family="ipv4" source address="192.168.2.0/24" destination address="192.168.3.0/24" protocol value="icmp" drop
        rule family="ipv4" source address="192.168.2.0/24" destination address="192.168.3.0/24" protocol value="tcp" drop
        rule family="ipv4" source address="192.168.3.0/24" destination address="192.168.2.0/24" protocol value="icmp" drop
        rule family="ipv4" source address="192.168.3.0/24" destination address="192.168.2.0/24" protocol value="tcp" drop

internal
  target: default
  icmp-block-inversion: no
  interfaces: 
  sources: 
  services: dhcpv6-client mdns samba-client ssh
  ports: 
  protocols: 
  forward: no
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

libvirt
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: 
  sources: 
  services: dhcp dhcpv6 dns ssh tftp
  ports: 
  protocols: icmp ipv6-icmp
  forward: no
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
        rule priority="32767" reject

nm-shared
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: 
  sources: 
  services: dhcp dns ssh
  ports: 
  protocols: icmp ipv6-icmp
  forward: no
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
        rule priority="32767" reject

public
  target: default
  icmp-block-inversion: no
  interfaces: 
  sources: 
  services: dhcpv6-client
  ports: 
  protocols: 
  forward: no
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

trusted
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: 
  sources: 
  services: 
  ports: 
  protocols: 
  forward: no
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

work
  target: default
  icmp-block-inversion: no
  interfaces: 
  sources: 
  services: dhcpv6-client ssh
  ports: 
  protocols: 
  forward: no
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

Policies

# firewall-cmd --list-all-policies
allow-host-ipv6 (active)
  priority: -15000
  target: CONTINUE
  ingress-zones: ANY
  egress-zones: HOST
  services: 
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
        rule family="ipv6" icmp-type name="neighbour-advertisement" accept
        rule family="ipv6" icmp-type name="neighbour-solicitation" accept
        rule family="ipv6" icmp-type name="router-advertisement" accept
        rule family="ipv6" icmp-type name="redirect" accept

On firewalld 0.9.3:

firewall-cmd --direct --get-all-rules 
ipv4 nat POSTROUTING 0 -o ppp0 -j MASQUERADE
ipv4 filter FORWARD 0 -i vlan -o br0 -j ACCEPT

Settings of the two zones, external and home:

# firewall-cmd --list-all --zone=home
home (active)
  target: default
  icmp-block-inversion: no
  interfaces: br0 eno1 eno2 vlan1 vlan2 vlan3 vlan4
  sources: 
  services: apache2 apache2-ssl dhcp dhcpv6 dhcpv6-client dns dns-over-tls finger ftp http https imap imaps ipp ipp-client irc kdeconnect kerberos kpasswd ldap ldaps libvirt libvirt-tls libvirtd-relocation-server mdns minidlna mountd mysql nfs nfs3 ntp openvpn rpc-bind rsyncd samba samba-client samba-dc sane slp smtp smtps snmp ssh tftp tigervnc tigervnc-https transmission-client vnc-server
  ports: 67/udp 68/udp
  protocols: 
  forward: yes
  masquerade: yes
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
        rule family="ipv4" source address="192.168.2.0/24" destination address="192.168.3.0/24" protocol value="icmp" drop
        rule family="ipv4" source address="192.168.2.0/24" destination address="192.168.3.0/24" protocol value="tcp" drop
        rule family="ipv4" source address="192.168.3.0/24" destination address="192.168.2.0/24" protocol value="icmp" drop
        rule family="ipv4" source address="192.168.3.0/24" destination address="192.168.2.0/24" protocol value="tcp" drop

# firewall-cmd --list-all --zone=external
external (active)
  target: default
  icmp-block-inversion: no
  interfaces: eno3 ppp0
  sources: 
  services: ssh
  ports: 
  protocols: 
  forward: no
  masquerade: yes
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:

The nft tables and rules:

nft list tables
table inet firewalld
table ip firewalld
table ip6 firewalld
hpprol2:/local/download64/rpm_install # nft list table ip firewalld
table ip firewalld {
        chain nat_PREROUTING {
                type nat hook prerouting priority dstnat + 10; policy accept;
                jump nat_PREROUTING_POLICIES_pre
                jump nat_PREROUTING_ZONES
                jump nat_PREROUTING_POLICIES_post
        }

        chain nat_PREROUTING_POLICIES_pre {
                jump nat_PRE_policy_allow-host-ipv6
        }

        chain nat_PREROUTING_ZONES {
                iifname "vlan4" goto nat_PRE_home
                iifname "br0" goto nat_PRE_home
                iifname "eno1" goto nat_PRE_home
                iifname "vlan3" goto nat_PRE_home
                iifname "vlan2" goto nat_PRE_home
                iifname "eno2" goto nat_PRE_home
                iifname "vlan1" goto nat_PRE_home
                iifname "ppp0" goto nat_PRE_external
                iifname "eno3" goto nat_PRE_external
                iifname "docker0" goto nat_PRE_docker
                goto nat_PRE_home
        }

        chain nat_PREROUTING_POLICIES_post {
        }

        chain nat_POSTROUTING {
                type nat hook postrouting priority srcnat + 10; policy accept;
                jump nat_POSTROUTING_POLICIES_pre
                jump nat_POSTROUTING_ZONES
                jump nat_POSTROUTING_POLICIES_post
        }

        chain nat_POSTROUTING_POLICIES_pre {
        }

        chain nat_POSTROUTING_ZONES {
                oifname "vlan4" goto nat_POST_home
                oifname "br0" goto nat_POST_home
                oifname "eno1" goto nat_POST_home
                oifname "vlan3" goto nat_POST_home
                oifname "vlan2" goto nat_POST_home
                oifname "eno2" goto nat_POST_home
                oifname "vlan1" goto nat_POST_home
                oifname "ppp0" goto nat_POST_external
                oifname "eno3" goto nat_POST_external
                oifname "docker0" goto nat_POST_docker
                goto nat_POST_home
        }

        chain nat_POSTROUTING_POLICIES_post {
        }

        chain nat_POST_docker {
                jump nat_POST_docker_pre
                jump nat_POST_docker_log
                jump nat_POST_docker_deny
                jump nat_POST_docker_allow
                jump nat_POST_docker_post
        }

        chain nat_POST_docker_pre {
        }

        chain nat_POST_docker_log {
        }

        chain nat_POST_docker_deny {
        }

        chain nat_POST_docker_allow {
        }

        chain nat_POST_docker_post {
        }

        chain nat_PRE_docker {
                jump nat_PRE_docker_pre
                jump nat_PRE_docker_log
                jump nat_PRE_docker_deny
                jump nat_PRE_docker_allow
                jump nat_PRE_docker_post
        }

        chain nat_PRE_docker_pre {
        }

        chain nat_PRE_docker_log {
        }

        chain nat_PRE_docker_deny {
        }

        chain nat_PRE_docker_allow {
        }

        chain nat_PRE_docker_post {
        }

        chain nat_POST_external {
                jump nat_POST_external_pre
                jump nat_POST_external_log
                jump nat_POST_external_deny
                jump nat_POST_external_allow
                jump nat_POST_external_post
        }

        chain nat_POST_external_pre {
        }

        chain nat_POST_external_log {
        }

        chain nat_POST_external_deny {
        }

        chain nat_POST_external_allow {
                oifname != "lo" masquerade
        }

        chain nat_POST_external_post {
        }

        chain nat_PRE_external {
                jump nat_PRE_external_pre
                jump nat_PRE_external_log
                jump nat_PRE_external_deny
                jump nat_PRE_external_allow
                jump nat_PRE_external_post
        }

        chain nat_PRE_external_pre {
        }

        chain nat_PRE_external_log {
        }

        chain nat_PRE_external_deny {
        }

        chain nat_PRE_external_allow {
        }

        chain nat_PRE_external_post {
        }

        chain nat_POST_home {
                jump nat_POST_home_pre
                jump nat_POST_home_log
                jump nat_POST_home_deny
                jump nat_POST_home_allow
                jump nat_POST_home_post
        }

        chain nat_POST_home_pre {
        }

        chain nat_POST_home_log {
        }

        chain nat_POST_home_deny {
        }

        chain nat_POST_home_allow {
                oifname != "lo" masquerade
        }

        chain nat_POST_home_post {
        }

        chain nat_PRE_home {
                jump nat_PRE_home_pre
                jump nat_PRE_home_log
                jump nat_PRE_home_deny
                jump nat_PRE_home_allow
                jump nat_PRE_home_post
        }

        chain nat_PRE_home_pre {
        }

        chain nat_PRE_home_log {
        }

        chain nat_PRE_home_deny {
        }

        chain nat_PRE_home_allow {
        }

        chain nat_PRE_home_post {
        }

        chain nat_PRE_policy_allow-host-ipv6 {
                jump nat_PRE_policy_allow-host-ipv6_pre
                jump nat_PRE_policy_allow-host-ipv6_log
                jump nat_PRE_policy_allow-host-ipv6_deny
                jump nat_PRE_policy_allow-host-ipv6_allow
                jump nat_PRE_policy_allow-host-ipv6_post
        }

        chain nat_PRE_policy_allow-host-ipv6_pre {
        }

        chain nat_PRE_policy_allow-host-ipv6_log {
        }

        chain nat_PRE_policy_allow-host-ipv6_deny {
        }

        chain nat_PRE_policy_allow-host-ipv6_allow {
        }

        chain nat_PRE_policy_allow-host-ipv6_post {
        }
}

Regards
Philippe

Are you sure? It means packets forwarded to your downstream interfaces will be masqueraded. That is unlikely to be what you want.
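If masquerading really was enabled on the downstream “home” zone by accident, it can presumably be removed again with something like:

```shell
# Drop the (probably unintended) masquerade from the downstream zone:
firewall-cmd --permanent --zone=home --remove-masquerade
firewall-cmd --reload
```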

nft list tables
table inet firewalld
table ip firewalld
table ip6 firewalld
hpprol2:/local/download64/rpm_install # nft list table ip firewalld

This only shows the masquerading rules, which are certainly present. The bulk of the firewalld rules are in the inet table, which is missing.

firewall-cmd --direct --get-all-rules
ipv4 nat POSTROUTING 0 -o ppp0 -j MASQUERADE
ipv4 filter FORWARD 0 -i vlan -o br0 -j ACCEPT

Could you test individually whether you need both rules or just one of them?

Right, I added it without checking that it was in zone home: removed.

This only shows masquerading rules which are certainly present. The bulk of firewalld rules are in inet table which is missing.

The list of table inet is 14400 bytes for version 0.9.3 and 18600 bytes for version 1.0.0:
version firewalld 0.9.3 is on https://paste.opensuse.org/40452023
version firewalld 1.1.1 SUSE Paste

Could you test individually whether you need both rules on just one of them?

Only the rule “ipv4 nat POSTROUTING 0 -o ppp0 -j MASQUERADE” is needed for the internet connection. It is still listed with firewalld 1.0.0 but no longer has any effect.
The rule “ipv4 filter FORWARD 0 -i vlan -o br0 -j ACCEPT” was used to give some VLAN PCs access to the server.

Regards
Philippe

Do you have a time machine? The latest version on the firewalld site is 1.0.1.

Anyway. This most certainly contains masquerading rules:

    chain nat_POST_external_allow {
        meta nfproto ipv4 oifname != "lo" masquerade
    }

What it does not contain is forwarding between zones. That is the correct behavior - by definition a zone isolates traffic, so forwarding is possible only between interfaces in the same zone (like “home”, which has “forwarding” enabled), but not between interfaces in different zones.

    chain filter_FWD_external {
        jump filter_FORWARD_POLICIES_pre
        jump filter_FWD_external_pre
        jump filter_FWD_external_log
        jump filter_FWD_external_deny
        jump filter_FWD_external_allow
        jump filter_FWD_external_post
        jump filter_FORWARD_POLICIES_post
        reject with icmpx type admin-prohibited
    }
    chain filter_FWD_home {
        jump filter_FORWARD_POLICIES_pre
        jump filter_FWD_home_pre
        jump filter_FWD_home_log
        jump filter_FWD_home_deny
        jump filter_FWD_home_allow
        jump filter_FWD_home_post
        jump filter_FORWARD_POLICIES_post
        reject with icmpx type admin-prohibited
    }

So any packet entering one of these zones and not directed to the local host will be rejected. You must use policies to allow forwarding.
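A minimal inter-zone forwarding policy could be sketched like this (the policy name homeOut is illustrative; --set-target ACCEPT allows all traffic forwarded from the ingress to the egress zone):

```shell
# Allow forwarding from zone "home" to zone "external"
# ("homeOut" is an illustrative name):
firewall-cmd --permanent --new-policy homeOut
firewall-cmd --permanent --policy homeOut --add-ingress-zone home
firewall-cmd --permanent --policy homeOut --add-egress-zone external
firewall-cmd --permanent --policy homeOut --set-target ACCEPT
firewall-cmd --reload
```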

This [direct rule] is still listed with firewalld 1.0.0 but no longer has any effect.
That is correct. With the nftables backend a direct rule can only be used to block additional traffic - it cannot be used to allow something that is blocked by the main firewall configuration.

P.S. First you say version 1.1.1 and now you say version 1.0.0. It does not make following what you are doing easier.

Sorry, typo: I have firewalld 1.0.0 installed.

Anyway. This most certainly contains masquerading rules:

    chain nat_POST_external_allow {
        meta nfproto ipv4 oifname != "lo" masquerade
    }

What it does not contain is forwarding between zones. That is the correct behavior - by definition a zone isolates traffic, so forwarding is possible only between interfaces in the same zone. So any packet entering one of these zones and not directed to the local host will be rejected. You must use policies to allow forwarding.

I have very little knowledge about firewalld policies; this is the first time that I must use them.
The direct rules in 0.9.3 used table nat and POSTROUTING to one interface (ppp0). So for the policies needed in 1.0.0 I need to define a new policy, add ingress and egress zones, and a rule for forwarding.
something like

# firewall-cmd --permanent --new-policy AccessInternet
# firewall-cmd --permanent --policy AccessInternet --add-ingress-zone external
# firewall-cmd --permanent --policy AccessInternet --add-egress-zone home
# firewall-cmd --permanent --policy AccessInternet --add-rich-rule='rule family="ipv4" source="192.168.0.0/21" accept'

Do I need to use ipv4 or inet? The changes for 1.0.0 mention “NAT rules moved to inet family”.

But for the answer: do I need to add a policy and rules with inverted ingress/egress and destination, or is the existing masquerade enough?

# firewall-cmd --permanent --new-policy FromInternet
# firewall-cmd --permanent --policy FromInternet --add-ingress-zone home
# firewall-cmd --permanent --policy FromInternet --add-egress-zone external
# firewall-cmd --permanent --policy FromInternet --add-rich-rule='rule family="ipv4" destination="192.168.0.0/21" accept'

Many thanks in advance
Philippe

If this is intended to allow forwarding from the home zone to the internet, the ingress zone should logically be “home” (this is the zone where the packet originates) and the egress zone should be “external”.

# firewall-cmd --permanent --policy AccessInternet --add-rich-rule='rule family="ipv4" source="192.168.0.0/21" accept'

Do I need to use ipv4 or inet?

As usual, “it depends”. If you want to masquerade both IPv4 and IPv6 traffic, you can just leave “family” out. “inet” is not valid in this place.

The changes for 1.0.0 mention “NAT rules moved to inet family”.

Well, if you leave “family” out, the result will most likely be “inet”.

Do I need to add a policy and rules with inverted ingress/egress and destination, or is the existing masquerade enough?

This should be handled by the default rule for connection tracking, where packets in the reverse direction are allowed for established and related connections.
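Putting the thread together, a working replacement for the old direct rules might look like this sketch (using the AccessInternet policy name proposed above; note that the rich-rule grammar expects source address=..., not source=...):

```shell
# Masquerade on the internet-facing zone:
firewall-cmd --permanent --zone=external --add-masquerade
# Forwarding policy home -> external; the reverse direction is covered
# by connection tracking, so no second policy is needed:
firewall-cmd --permanent --new-policy AccessInternet
firewall-cmd --permanent --policy AccessInternet --add-ingress-zone home
firewall-cmd --permanent --policy AccessInternet --add-egress-zone external
firewall-cmd --permanent --policy AccessInternet \
    --add-rich-rule='rule family="ipv4" source address="192.168.0.0/21" accept'
firewall-cmd --reload
```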

OK

This should be handled by the default rule for connection tracking, where packets in the reverse direction are allowed for established and related connections.

Many thanks for your help. The internet connection is now working for the PCs on the VLANs. :slight_smile: Without your precious help I could not have reestablished the connection. Again many thanks.

Regards
Philippe