Router Operations
Login classes and users
system {
login {
class Ops {
permissions [ clear network view view-configuration ];
}
user NOC {
class Ops;
authentication {
encrypted-password "blahblah"; ## SECRET-DATA
}
}
}
}
Apply groups
groups {
all-atm {
interfaces {
<at-*> {
encapsulation atm-pvc;
}
}
}
}
interfaces {
apply-groups all-atm;
}
Result:
interfaces {
at-0/0/2 {
encapsulation atm-pvc;
}
}
Inherited configuration statements appear only with this command:
show interfaces | display inheritance | except ##
Basic interfaces
Permanent interfaces:
fxp0 is for OOB management.
fxp1/bcm0 connects the RE to the PFE.
fxp2 connects the REs together.
Examples:
interfaces {
fxp0 { # OOB port. J-Series OOB is 0/0/0.
unit 0 {
family inet {
address 192.168.10.25/24;
}
}
}
lo0 { # This is the only loopback in Junos.
unit 0 {
family inet {
address 1.1.1.1/32;
}
}
}
at-0/0/0 {
atm-options {
vpi 0 {
maximum-vcs 200;
}
}
unit 0 {
vci 0.100; # VPI.VCI
family inet {
address 1.0.0.0/30;
}
}
}
ge-1/0/0 {
vlan-tagging; # Remove this and vlan-id for IPoE.
unit 100 {
vlan-id 100;
family inet {
address 2.0.0.1/24;
}
}
}
so-2/0/0 {
encapsulation frame-relay;
unit 100 {
dlci 100;
family inet {
address 3.0.0.1/30;
}
}
}
ct3-3/0/0 {
partition 1 interface-type ct1;
partition 2-28 interface-type t1;
}
ct1-3/0/0:1 {
partition 1 timeslots 1-10 interface-type ds;
partition 2 timeslots 11-23 interface-type ds;
}
ds-3/0/0:1:1 {
description "First ds0 channel bundle of ct1-3/0/0:1";
unit 0 {
family inet {
address 4.0.0.1/30;
}
}
}
t1-3/0/0:2 {
encapsulation cisco-hdlc;
unit 0 {
family inet {
address 5.0.0.1/30;
}
}
}
se-4/0/0 {
serial-options {
clocking-mode dce; # Remove on DTE side.
}
unit 0 {
family inet {
address 6.0.0.1/30;
}
}
}
}
LAG interfaces
chassis {
aggregated-devices {
ethernet {
device-count 2; # Creates ae0, ae1...
}
}
}
interfaces {
ge-0/0/0 {
gigether-options {
802.3ad ae0;
}
}
ge-1/0/0 {
gigether-options {
802.3ad ae0;
}
}
ae0 {
aggregated-ether-options {
lacp {
active;
}
}
unit 0 {
family inet {
address 1.2.3.4/24;
}
}
}
}
Logging
Syslog
system {
syslog {
host 1.2.3.4 {
any any; # Severity "any" is equivalent to "debug".
}
}
}
To a file:
system {
syslog {
file All-Syslog-Alerts {
any alert;
explicit-priority;
archive {
files 10;
size 1m;
world-readable;
}
}
}
}
SNMP
snmp {
community Public {
authorization read-only;
clients {
1.2.3.0/24;
}
}
trap-group FisherCo-SNMP-Traps {
version v2;
categories {
link;
}
targets {
1.2.3.1;
}
}
trap-options {
source-address lo0;
}
}
show snmp mib walk jnxOperatingDescr
Protocol-independent routing
Static routes
routing-options {
static {
route 1.2.3.0/24 next-hop 192.168.10.1; # or "next-hop [ 1.1.1.1 2.2.2.1 ]" for ECMP.
route 0.0.0.0/0 discard; # or "reject" to send ICMP unreachables.
route 2.3.4.0/24 {
next-hop 192.168.10.1 resolve; # "resolve" enables recursive next-hop lookups.
qualified-next-hop 192.168.20.1 { # Allows independent preferences.
preference 300; # Can be up to 4,294,967,295. For a floating static route.
}
}
}
}
Aggregate routes
Create a black-hole route for the specified range when a more-specific route exists. Routing protocols can reference the route.
routing-options {
aggregate {
route 1.2.3.0/24; # Default next-hop is reject. Can be set to discard.
}
}
show route <prefix> exact detail # Displays contributing routes.
Generated routes
Create this route when a more-specific route exists. It inherits the lowest-preference or -numbered contributing route’s next hop.
routing-options {
generate {
route 1.2.3.0/24; # Appears as an aggregate route with an A.D. of 130.
}
}
Martians
routing-options {
martians {
127.0.0.0/8 orlonger allow; # Override default martian address.
10.0.0.0/8 orlonger; # Create a new entry that should never be in inet.0.
}
}
show route martians [table <routing-table>]
Routing instances
routing-instances {
FisherCo-VR { # Treat this as the VR root. Routing protocols go under it.
instance-type virtual-router; # VR, VRF, etc.
interface ge-1/0/0.0; # Assign L3 IFLs, such as lo0.1, to the instance.
}
}
show route (instance FisherCo-VR | table FisherCo-VR.inet.0)
show interfaces terse routing-instance FisherCo-VR
(ping|traceroute) 1.2.3.4 routing-instance FisherCo-VR
Communication between routing instances requires either "lt" (logical tunnel) interfaces or a cable looped between two ports. The
latter solution wastes ports, whereas an "lt" interface can use an existing services PIC.
interfaces lt-0/0/0 {
unit 0 {
encapsulation ethernet; # An "lt" interface is point-to-point only, regardless of encapsulation.
peer-unit 1;
family inet {
address 192.168.0.1/30;
}
}
unit 1 {
encapsulation ethernet;
peer-unit 0;
family inet {
address 192.168.0.2/30;
}
}
}
routing-instances {
FisherCo-VR-0 {
interface lt-0/0/0.0;
}
FisherCo-VR-1 {
interface lt-0/0/0.1;
}
}
High Availability
GR
GR helper mode is on by default. Enable GR restarting-router mode globally and then disable it per-protocol.
routing-options {
graceful-restart; # Set "graceful-restart disable" for specific protocols or neighbors.
}
protocols {
bgp {
graceful-restart {
disable; # It is still enabled for OSPF, IS-IS, PIM-SM, RIP, etc., if applicable.
}
}
}
show bgp neighbor <peer-ip> | find Options # "GracefulRestart" should be there.
Monitor with the graceful-restart traceoptions flag.
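The GR traceoptions flag exists per protocol; a hedged sketch under OSPF (the file name is a placeholder):
protocols {
ospf {
traceoptions {
file GR-Log;
flag graceful-restart detail;
}
}
}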
GRES
Enable GRES:
chassis {
redundancy {
graceful-switchover;
}
}
show system switchover # On the backup RE only.
request chassis routing-engine master [acquire|release|switch]
NSR
Preserves control plane (routing protocols) during an RE switchover. Disable GR, and enable GRES and NSR together.
routing-options {
nonstop-routing;
}
show (task|bgp) replication
request routing-engine login other-routing-engine # Get into the other RE, and...
show (route|ospf neighbor|isis adjacency|bgp summary) # ...verify NSR is functioning.
VRRP
Master router:
interfaces {
ge-2/0/0.0 { # On the same LAN as the backup router.
unit 0 {
family inet {
address 21.10.10.2/24 { # Backup router’s IP could be 21.10.10.3/24.
vrrp-group 100 { # Copy this substructure to backup router.
virtual-address 21.10.10.1/24;
accept-data; # Make virtual-address pingable.
priority 150; # Set lower on the backup router (i.e. 110).
track-interface ae0.0 { # If this router’s uplink goes down...
priority-cost 50; # ...reduce priority to lose mastership.
}
}
}
}
}
}
}
show vrrp [summary|detail|extensive]
Policies
Routing policies and firewall filters (IP policies) have the same structure: Name, Terms, Match Conditions, and Actions.
Routing policy
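A minimal routing-policy skeleton showing the shared structure (all names are placeholders):
policy-options {
policy-statement Example-Policy { # Name
term Match-Static { # Term
from { # Match conditions
protocol static;
}
then { # Actions
accept;
}
}
}
}
Apply it, for example, with "set protocols bgp export Example-Policy".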
Firewall filter
firewall {
family inet {
filter Generic-Input-Filter {
term Restrict-Telnet-SSH {
from {
protocol tcp;
destination-port [ telnet ssh ];
source-address {
10.0.0.0/8;
}
}
then {
count Telnet-SSH; # Can also “log” and then “show firewall log”.
accept;
}
}
term Allow-All {
then accept; # Can be dangerous if preceding terms do not secure the router.
}
}
}
}
set interfaces ge-0/0/0 unit 0 family inet filter input Generic-Input-Filter
set interfaces ge-0/0/0 unit 0 family inet filter output Generic-Output-Filter
show firewall filter Generic-Input-Filter
All firewall filters end with an implicit deny-all, equivalent to a final term: then discard;
If a match condition is true and no action is specified, then the packet is accepted.
JUNOS match logic
The policy and filter matching algorithms use && (AND) logic when the match criteria are different:
from {
protocol static;
route-filter 1.2.3.0/24 exact;
}
then accept;
Logic: If (protocol == static) && (prefix == 1.2.3.0/24) then accept.
Other than with a route filter, the policy and filter matching algorithms use || (OR) logic when the match criteria are the same:
from {
protocol [ static direct ];
}
then accept;
Logic: If (protocol == static) || (protocol == direct) then accept.
The policy and filter matching algorithms use longest-match logic when implementing multiple route filters:
from {
route-filter 1.2.3.0/24 orlonger;
route-filter 1.2.3.0/26 orlonger; # This filter will match 1.2.3.0/30, since it is more specific.
}
then accept;
Configuration order only matters if the route filters’ prefixes and prefix lengths are identical.
CBF
policy-options {
policy-statement CBF-1-2-Gold-10GE {
term network-1-2 {
from {
route-filter 1.2.0.0/16 orlonger; # Only these routes are affected.
}
then {
cos-next-hop-map CBF-Gold-Map;
}
}
}
}
class-of-service {
forwarding-policy {
next-hop-map CBF-Gold-Map {
forwarding-class Gold {
next-hop 1.2.1.2; # Gold takes the direct route.
}
forwarding-class Best-Effort {
next-hop 5.4.3.2; # Best-effort takes the scenic route.
}
}
}
}
routing-options {
forwarding-table {
export CBF-1-2-Gold-10GE;
}
}
Load balancing
policy-options {
policy-statement Load-Balance-All {
then { # Add a "from" statement to affect only some traffic.
load-balance per-packet; # "per-packet" means "per-flow" on IP-II ASICs.
}
}
}
routing-options {
forwarding-table {
export Load-Balance-All;
}
}
show route forwarding-table [matching <prefix>]
Layer-four load balancing
Junos includes SA, DA, protocol, and ingress interface index for its default load-balancing hash. To add the layer-four variables of
source and destination port, specify hashing on both layers:
forwarding-options {
hash-key {
family inet {
layer-3;
layer-4;
}
}
}
FBF
Ingress FBF is roughly equivalent to PBR (Policy-Based Routing). Ingress FBF sends traffic to a forwarding instance in which one or
more static routes exist. A RIB group enables next-hop lookup in the forwarding instance.
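A hedged FBF sketch (addresses and names are placeholders): a filter classifies traffic into a forwarding instance, and a RIB group shares interface routes with it for next-hop resolution:
firewall {
family inet {
filter FBF-Classifier {
term To-Alt-Path {
from {
source-address {
10.1.1.0/24;
}
}
then routing-instance FBF-Instance;
}
term Default {
then accept;
}
}
}
}
routing-instances {
FBF-Instance {
instance-type forwarding;
routing-options {
static {
route 0.0.0.0/0 next-hop 172.16.0.1;
}
}
}
}
routing-options {
interface-routes {
rib-group inet FBF-RIB-Group;
}
rib-groups {
FBF-RIB-Group {
import-rib [ inet.0 FBF-Instance.inet.0 ];
}
}
}
Apply FBF-Classifier as an input filter on the ingress interface.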
RIP
By default, Junos can receive both RIPv1 and RIPv2 routes but sends only RIPv2 routes. RIP doesn't scale, and it easily causes domain-loop issues when there are multiple IGP redistribution points. Know RIP, but don't use it.
protocols {
rip {
authentication-type md5;
authentication-key "blahblah"; ## SECRET-DATA
group All-RIP-Neighbors {
neighbor ge-0/0/0.0 metric-in 2; # Default RIP metric is 1.
neighbor ge-1/0/0.0;
}
}
}
show rip neighbor [<ifl>]
show rip statistics [<ifl>]
Logging
protocols {
rip {
traceoptions {
file RIP-Update-Log;
flag update;
}
}
}
file copy /var/log/rip-update-log server1:rip-update-log-2010-11-30
clear log rip-update-log
OSPF
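A minimal single-area sketch in the same style (interface names are placeholders):
protocols {
ospf {
reference-bandwidth 100g;
area 0.0.0.0 {
interface ge-0/0/0.0;
interface lo0.0 {
passive;
}
}
}
}
show ospf (interface|neighbor|database)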
IS-IS
Minimum steps: 1) Define interfaces in IS-IS and their levels. 2) Enable ISO on the interfaces. 3) Configure a NET address on lo0.
protocols {
isis {
export [ Level2-Leak Export-Direct Export-Static ];
overload timeout 60;
reference-bandwidth 100g;
level 2 {
authentication-type md5;
authentication-key "blahblah"; ## SECRET-DATA
prefix-export-limit 500; # Maximum prefixes sent to LSDB. Be careful with this command.
}
interface ge-0/0/0.0; # Both levels by default.
interface ge-1/0/0.0 {
level 2 disable;
mesh-group 1; # Do not reflood LSPs from this link’s neighbor to mesh group 1.
priority 0; # 0 == Never become DIS. Default is 64.
level 1 {
metric 50; # Default is 10.
wide-metrics-only;
}
}
interface ge-2/0/0.0 {
mesh-group blocked; # Send no LSPs to the neighbor on this link.
}
interface lo0.0; # Auto-passive.
}
}
interfaces {
lo0 {
unit 0 {
family iso {
address 49.0020.1921.6801.9001.00;
}
}
}
ge-0/0/0 {
unit 0 {
family iso; # Configure on each IS-IS interface.
}
}
}
show isis interface [<ifl>] [detail]
show isis adjacency [detail]
clear isis adjacency <hostname>
show isis database [detail|extensive] [level 1|level 2] [LSP ID]
show isis (route|spf log|statistics)
show route protocol isis
A policy to leak a level-2 route into level 1 to increase routing efficiency when an area has multiple level-2 exit points:
policy-options {
policy-statement Level2-Leak {
term Leak-10-Subnets {
from {
protocol isis;
level 2;
route-filter 10.0.21.0/24 orlonger;
}
to {
protocol isis;
level 1;
}
then {
accept;
}
}
}
}
BGP
routing-options {
autonomous-system 1234 loops 1; # “loops”: A received route with 1234 in its path once is not filtered.
router-id 10.10.0.1;
}
protocols {
bgp {
export Export-Static; # Groups and neighbors inherit unless there are more-specific policies.
authentication-key-chain Merger-Keys;
group AS7654-Peers {
type external;
peer-as 7654;
neighbor 20.0.31.2; # Peer’s interface address, unless multihop is configured.
neighbor 20.0.40.1 {
advertise-peer-as; # Advertise routes even from this peer’s AS to this peer.
advertise-inactive; # Advertise the best BGP route, even if it’s not active in inet.0.
local-address 10.10.0.1; # Required for multihop.
multihop ttl 1; # Remember to create a route to the peer’s lo0.
}
neighbor 20.0.32.2 {
local-as 5678 private; # Uses 5678 for this neighbor only, but “private” does not prepend 5678.
local-as loops 1; # A received route with 5678 in its path once is not filtered.
as-override; # Overwrite the peer’s ASN in the path upon export. Be careful of loops.
metric-out 100; # Options: <metric>, igp [<offset>], and minimum-igp [<offset>].
}
authentication-key “blah”;
remove-private; # Remove private AS numbers when exporting.
local-preference 200; # Can set at global, group, or peer level and in policies at each level.
multipath; # Usually set this under specific neighbors instead.
hold-time 45; # Dead time. Default is 90 sec. Keepalive timer == hold-time/3.
family inet {
unicast {
prefix-limit {
maximum 350000;
teardown 95 idle-timeout 60; # Syslog warnings at 95% of maximum. Tear down for 60 min.
}
}
}
}
group IBGP-Peers {
export Next-Hop-to-Self; # Cancels the inheritance of all higher export policies.
type internal;
local-address 10.10.0.1; # lo0.
neighbor 10.10.20.1 passive; # Do not initiate this BGP session, but allow the peer to initiate it.
neighbor 10.10.30.1 {
export Export-Direct; # Cancels the inheritance of all higher export policies.
}
allow 10.10.40.0/24; # Allow peer-initiated neighborships with any IP in this range.
}
}
}
security {
authentication-key-chains {
key-chain Merger-Keys {
key 1 { # Create each key here.
secret MyPassword;
start-time 2012-09-21.10:11:00;
}
}
}
}
show bgp neighbor [<neighbor-ip>]
show bgp summary
show route receive-protocol bgp [<neighbor-ip>] [<destination-ip>] ...
show route advertising-protocol bgp [<neighbor-ip>] [<destination-ip>] ...
show route protocol bgp
show bgp group
show route hidden extensive
Next hop self (export), next hop peer (import)
Next hop self is useful when exporting routes from eBGP peers to iBGP peers to ensure next-hop reachability.
policy-options {
policy-statement Next-Hop-to-Self {
term 1 {
from {
protocol bgp;
route-type external;
}
then {
next-hop self;
}
}
}
}
Implement next hop peer when importing from eBGP peers who advertise routes with unreachable next hops.
policy-options {
policy-statement Next-Hop-to-Peer {
term 1 {
then {
next-hop peer-address;
}
}
}
}
policy-options {
policy-statement Higher-Local-Pref-to-PE1 {
from {
route-filter 1.2.3.0/24 exact;
}
then {
local-preference 200; # 100 is default.
accept;
}
}
}
policy-options {
policy-statement Prepend-AS-Path-3 {
term 1 {
then {
as-path-prepend "1234 1234 1234";
}
}
}
}
policy-options {
as-path Traversed-AS65432 ".* 65432 .*"; # Remember to master AS path regex operators (not character-based).
policy-statement Filter-FisherCo-Private {
term Filter-AS65432 {
from {
as-path Traversed-AS65432;
}
then reject;
}
}
}
Communities (import/export)
policy-options {
community AS65432 members 65432:100; # Master the community regex operators (character-based).
community AS123xx members "123[0-9][0-9]:(10|15|20)";
community Wildcard members "*:*"; # Represents all communities.
policy-statement AS65432-Import {
term 1 {
from {
protocol bgp;
as-path From-AS65432;
}
then {
community delete AS123xx;
community add AS65432; # “set AS65432” would remove all other communities.
community add no-export; # Other options: no-advertise or no-export-subconfed.
next policy;
}
}
then community delete Wildcard; # An unnamed final term that deletes all communities.
}
}
Origin (export)
policy-options {
policy-statement Export-IGP-Origin {
term 1 {
from {
protocol bgp;
origin incomplete;
}
then {
origin igp;
}
}
}
}
MED (export)
policy-options {
policy-statement Export-MED {
term 1 {
from {
route-filter 1.2.3.0/24 exact;
}
then {
metric 100; # Options: <metric>, igp [<offset>], and minimum-igp [<offset>].
}
}
}
}
Route-flap damping (import)
Per RIPE-378, route-flap damping is uncommon nowadays due to fast router CPUs.
Define damping
policy-options {
damping Normal-Damping {
suppress 6000; # High threshold. Each withdraw and update is worth 1,000 merit points.
half-life 20; # Default is 15 min.
reuse 3000; # Default is 750. Low threshold.
max-suppress 30; # Default is 60 min. before routes are restored.
}
damping No-Damping {
disable;
}
}
Create a policy
policy-options {
policy-statement Import-Damp-AS65432 {
term 1 {
from {
route-filter 30.30.0.0/16 exact;
}
then {
damping Normal-Damping;
next policy;
}
}
then damping No-Damping; # An unnamed final term.
}
}
Enable damping
protocols {
bgp {
damping;
}
}
protocols {
bgp {
group Ext-AS65432 {
import Import-Damp-AS65432;
}
}
}
Route reflection
There should be at least two route reflectors per cluster for redundancy. On clients in the cluster, only peer with the route
reflectors. (It is possible and sometimes useful to create a full iBGP mesh within the cluster, but add “no-client-reflect” to the
route reflectors in that situation.) Routers outside the cluster peer only with the route reflectors. When reflecting a route, a route reflector sets the originator ID attribute to the originating router's ID and prepends its own cluster ID to the cluster-list attribute. On the route reflectors, configure:
protocols {
bgp {
cluster 10.10.0.1; # Usually lo0, but the cluster ID can be any four-octet number.
}
}
Redundancy is especially important with hierarchical reflection. Each cluster’s route reflectors should be clients of another cluster’s
route reflectors; this is considered an upper level in the hierarchy.
Confederations
Divide the AS up into sub-AS confederations. Each iBGP peer in a confederation will have this configuration:
routing-options {
autonomous-system 65501; # This router’s sub-ASN (confederation).
confederation 1234 members [ 65501 65502 65503 ]; # The main ASN is 1234.
}
On the border routers between confederations:
protocols {
bgp {
group To-SubAS-65502 {
type external;
export Next-Hop-Self; # This is eBGP and thus requires a next-hop-self policy.
multihop; # Since it’s treated as eBGP, even though it’s not.
local-address 10.10.0.1; # Ditto.
neighbor 10.20.0.1;
peer-as 65502;
}
}
}
4-byte ASNs
routing-options {
autonomous-system 1.10; # AS-dot notation “1.10” = plain-number format “65546”.
}
protocols {
bgp {
group 4-Byte-ASN-Neighbors {
neighbor 10.40.0.1 {
peer-as 12345.12345;
}
}
group 2-Byte-ASN-Neighbors {
local-as 65432 private;
neighbor 10.40.0.1 {
peer-as 12345;
}
}
}
}
show bgp neighbor [<neighbor-ip>] # Look at the first line of the output.
MPLS
Configure MPLS on all ingress, transit, and egress interfaces that will process labels:
interfaces {
xe-4/0/0 {
unit 0 {
family mpls;
}
}
}
LDP
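A minimal LDP sketch (interface names assumed to match the RSVP example):
protocols {
ldp {
interface xe-4/0/0.0;
interface lo0.0; # Advertise the loopback FEC.
}
}
show ldp (neighbor|session|database)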
RSVP
protocols {
mpls {
interface xe-4/0/0.0; # Could be “interface all” on a core router.
interface fxp0.0 {
disable;
}
auto-policing { # Police transit RSVP-TE LSPs.
class all drop; # Or apply a policer to an LSP instead.
}
}
rsvp {
interface xe-4/0/0.0 {
authentication-key "blahblah"; ## SECRET-DATA. Per-IFL rather than per-session.
}
interface fxp0.0 {
disable;
}
traceoptions {
file RSVP-log;
flag error;
}
}
ospf {
traffic-engineering; # On by default in IS-IS.
}
}
show rsvp (session|interface [<ifl>]) [detail]
RSVP-TE LSPs
Static LSPs
Ingress LSR:
protocols {
mpls {
static-label-switched-path FisherCo-Static-LSP {
ingress {
next-hop 20.20.20.1;
to 30.30.30.1;
push 50507; # Optional.
}
}
}
}
show route table mpls.0
Transit LSR:
protocols {
mpls {
static-label-switched-path FisherCo-Static-LSP {
transit {
next-hop 21.21.21.1;
swap 70102; # Optional. Use "pop" action on the PHP LSR.
}
}
}
}
P2MP LSPs
protocols {
mpls {
label-switched-path Sub-LSP-to-PE2 {
to 30.30.30.1;
p2mp FisherCo-TV-LSP; # Associate each endpoint with the same P2MP LSP.
}
... and so on, for each endpoint.
}
}
routing-options {
static {
route 224.10.10.25 {
p2mp-lsp-next-hop FisherCo-TV-LSP; # Route the multicast group to the P2MP LSP.
}
}
multicast {
interface ge-0/0/0.0; # Forward multicast traffic received on this IFL.
}
}
CSPF
Administrative groups:
protocols {
mpls {
admin-groups { # Admin-groups must be identical domain-wide.
Gold 1;
Silver 2;
...and so on.
}
interface xe-0/0/0.0 {
admin-group [ Gold Silver ]; # Do for each applicable IFL.
}
label-switched-path PE1-to-PE2 {
primary PE1-to-PE2-Primary {
admin-group {
include-any [ Gold Silver ];
include-all [ Gold Platinum ];
exclude [ Lead Rust ]; # Evaluates as TRUE if exclusions are absent.
}
}
}
}
}
show mpls interface [<ifl>]
Logic: If (include-any: Gold || Silver) && (include-all: Gold && Platinum) && !(exclude: Lead || Rust), then CSPF can use this link.
Reoptimization:
protocols {
mpls {
optimize-aggressive; # Optional. Considers IGP metric only.
label-switched-path PE1-to-PE2 {
optimize-timer 300; # Seconds. Randomized to prevent synchronization.
}
}
}
clear mpls lsp optimize # Trigger manually.
clear mpls optimize-aggressive # Ditto.
Link and node protection
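A hedged sketch (LSP and interface names are placeholders): enable facility backup on the RSVP interface, then request protection per LSP:
protocols {
rsvp {
interface xe-4/0/0.0 {
link-protection; # Build bypass LSPs around this link.
}
}
mpls {
label-switched-path PE1-to-PE2 {
link-protection; # Or "node-link-protection" to also protect against node failure.
}
}
}
show (rsvp session|mpls lsp) detail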
VPNs
Verification
show route table bgp.l3vpn.0 [detail] [<ip-prefix>] # Routes with matching targets in any VRF.
show route table <vrf-name>.inet.0
show route table mpls.0 protocol vpn
show route forwarding-table vpn <vrf-name>
show bgp summary
clear arp vpn <vrf-name>
1. Connect the CE routers to their PE routers. Do not create layer 3 interfaces on the PE side:
interfaces {
ge-0/0/2 {
encapsulation vlan-ccc; # “extended-vlan-tcc” for IP interworking.
vlan-tagging;
unit 500 {
vlan-id 500;
encapsulation vlan-ccc; # “vlan-tcc” for IP interworking.
family ccc;
}
}
}
2. Set up an IGP on the PE and P routers.
3. Create all iBGP neighborships.
4. Configure MPLS and LDP/RSVP on the PE and P routers. Configure LSPs between PEs manually, if applicable.
5. On both PE routers, add the l2vpn address family.
protocols {
bgp {
group IBGP-Peers {
family l2vpn {
signaling;
}
}
}
}
6. Configure the routing instances on each PE router.
routing-instances {
FisherCo-VPN {
instance-type l2vpn;
interface ge-0/0/2.500;
route-distinguisher 65509:2;
vrf-target target:65509:100;
protocols {
l2vpn {
encapsulation-type ethernet-vlan; # Or “interworking”, etc.
site FisherCo-CE {
site-identifier 1;
interface ge-0/0/2.500 {
remote-site-id 2;
}
}
}
}
}
}
show bgp neighbor <address>
show l2vpn connections
show route table FisherCo-VPN
show route table (mpls.0|bgp.l2vpn.0)
1. Connect the respective CE routers to their PE routers and only configure layer-2 encapsulation on the interfaces:
interfaces {
ge-0/0/2 {
encapsulation vlan-ccc;
vlan-tagging;
unit 500 {
vlan-id 500;
encapsulation vlan-ccc; # Or “vlan-tcc” for IP interworking (removes L2 header).
family ccc;
}
}
}
2. Set up an IGP on the PE and P routers.
3. Configure MPLS and LDP on the PE and P routers. (RSVP isn’t supported without LDP tunneling or a PSN tunnel endpoint.)
4. On both PE routers, configure the layer-2 circuits.
protocols {
l2circuit {
neighbor 10.10.50.1 { # All circuits for this target PE router go here.
interface ge-0/0/2.500 { # The layer-2 interface on this PE.
virtual-circuit-id 2500; # Both sides match.
}
}
}
}
show l2circuit connections
show ldp (neighbor|database)
show route table inet.3
CCC
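A hedged sketch of a local interface-to-interface CCC (names are placeholders; both IFLs use a ccc encapsulation):
protocols {
connections {
interface-switch Local-CCC {
interface ge-0/0/1.500;
interface ge-0/0/2.500;
}
}
}
show connections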
Stitching an L2 VPN to an L2 Circuit
Stitching prevents the need for tunnel services when looping traffic between two VPNs on a single router.
interfaces {
iw0 { # “Interworking” interface.
unit 0 {
encapsulation vlan-ccc; # Must match with peer-unit.
mtu 1500; # Ditto. Can find the MTU with traceoptions.
vlan-id 567; # Ditto.
peer-unit 1; # iw0 IFLs point to each other.
}
unit 1 {
encapsulation vlan-ccc;
mtu 1500;
vlan-id 567;
peer-unit 0;
}
}
}
L2 circuit:
protocols {
l2iw; # Enable L2 interworking. Mandatory.
l2circuit {
neighbor 23.0.0.1 {
interface iw0.1 {
virtual-circuit-id 1234; # Needs to match neighbor router’s VC ID.
}
}
}
}
L2 VPN:
routing-instances FisherCo-VPN2 { # An “l2vpn”.
interface iw0.0;
protocols {
l2vpn {
site FisherCo-Stitch1 {
site-identifier 2;
interface iw0.0 {
remote-site-id 1;
}
}
}
}
}
VPLS
1. Connect CE devices, or switches, to each PE and use LAG, load balancing, STP, ERP, or another workaround if needed to prevent
loops. Do not configure layer-3 interface parameters on the PEs. Example PE interface for a VLAN-specific VPLS:
interfaces {
ge-0/0/1 {
vlan-tagging;
encapsulation vlan-vpls; # Use “ethernet-vpls” and add “family vpls” for full-port VPLS.
unit 567 { # Also remove all IFLs for full-port VPLS.
encapsulation vlan-vpls;
vlan-id 567;
family vpls;
}
}
}
2. Set up an IGP on the PE and P routers.
3. Create a full iBGP mesh while enabling “family l2vpn signaling” between all VPLS PEs.
4. Configure MPLS and LDP or RSVP on the PE and P routers’ interfaces. For RSVP, configure a full mesh of LSPs between the PEs.
Continued below…
5a. For a BGP VPLS: Establish the VPLS instance on each PE. Assign the global route distinguisher:
routing-instances FisherCo-VPLS {
instance-type vpls;
interface ge-0/0/1.567;
interface ge-0/0/2.567;
vrf-target target:65432:567;
route-distinguisher 24.0.0.1:1; # If multihomed, must match on the CE’s other home PE.
protocols {
vpls {
no-tunnel-services; # Similar to “vrf-table-label”; no tunnel services needed.
site-range 10;
site FisherCo-CE-1 {
site-identifier 50; # If multihomed, must match on the CE’s other home PE.
multi-homing; # Only if applicable.
site-preference 100; # Use if multihomed to avoid loops.
active-interface primary ge-0/0/1.567; # If dual-homed. Use “any” parameter for non-reversion.
interface ge-0/0/1.567;
interface ge-0/0/2.567;
}
label-block-size 4; # Labels per MP-BGP advertisement. 8 is default.
mac-table-size 200 packet-action drop; # Maximum MAC entries in this VPLS.
interface-mac-limit 100 packet-action drop; # Ditto, but a per-interface limit.
}
}
forwarding-options {
family vpls {
flood {
input BUM-150k-Policer; # Rate-limit broadcast and multicast traffic.
}
}
}
}
routing-options {
route-distinguisher-id 24.0.0.1;
autonomous-system 65432;
}
5b. For an LDP VPLS: Establish the VPLS instance on each PE. Configure MPLS signaling:
routing-instances FisherCo-VPLS {
instance-type vpls;
interface ge-0/0/1.567; # Add more interfaces to the instance if multihomed.
protocols {
vpls {
vpls-id 200; # Identifies this VPLS on all applicable PEs.
neighbor 25.0.0.1 { # Example primary forwarding path.
switchover-delay 5000; # Wait 5 seconds prior to failover.
revert-time 5; # Seconds after primary path recovers.
backup-neighbor 26.0.0.1 {
standby; # Make-before-break path redundancy.
}
}
}
}
}
protocols {
ldp {
interface lo0.0;
}
}
6. Optional. If P2MP LSPs are needed, then create them dynamically:
routing-instances FisherCo-VPLS {
provider-tunnel {
rsvp-te {
label-switched-path-template {
default-template;
}
}
}
}
show vpls (connections|statistics|mac-table|flood) [extensive]
clear vpls mac-table
show route table <instance> [extensive]
show route table mpls.0
show route forwarding-table family vpls
Stitching an L2 VPN to a VPLS
L2VPN instance:
routing-instances FisherCo-L2VPN {
interface lt-3/0/10.1;
protocols {
l2vpn {
site FisherCo-VPN-Stitch {
site-identifier 50;
interface lt-3/0/10.1;
}
}
}
}
VPLS instance:
routing-instances FisherCo-VPLS {
interface lt-3/0/10.0;
protocols {
vpls {
site FisherCo-CE-1 {
site-identifier 50;
interface lt-3/0/10.0;
}
}
}
}
1. Create a hybrid interworking routing instance with BGP and LDP signaling configuration commands.
2. Separate LDP neighbors into a non-default mesh group:
routing-instances FisherCo-VPLS-BGP-LDP-Stitch {
protocols {
vpls {
mesh-group FisherCo-LDP-Mesh {
vpls-id 200;
neighbor 25.0.0.1;
}
}
}
}
NG MVPN
1. Configure the BGP/MPLS VPN between the source PE and all receiver PEs.
2. Enable MVPN BGP signaling:
protocols {
bgp {
family inet-mvpn {
signaling;
}
}
}
3. Optionally specify a P2MP LSP template:
protocols {
mpls {
label-switched-path FisherCo-MVPN-Template {
template;
link-protection; # May also specify bw, path requirements, etc.
p2mp;
}
}
}
4a. Create the I-PMSI provider tunnel, if applicable:
routing-instances FisherCo-PE-VRF { # “vrf-table-label” prevents the need for tunnel services.
provider-tunnel {
rsvp-te {
label-switched-path-template {
FisherCo-MVPN-Template; # Or “default-template”.
}
}
}
}
4b. Create the S-PMSI provider tunnel, if applicable:
routing-instances FisherCo-PE-VRF {
provider-tunnel {
selective {
group 224.7.7.0/24 {
wildcard-source {
rsvp-te {
label-switched-path-template {
FisherCo-MVPN-Template; # Or “default-template”.
}
}
}
}
}
}
}
5. Set up multicast and MVPN in the VRF:
routing-instances FisherCo-PE-VRF {
protocols {
pim {
rp {
static { # Use “local” on the RP and “static” on non-RPs.
address 192.168.10.5;
}
}
interface all {
mode sparse;
}
}
mvpn {
mvpn-mode {
spt-only; # Or “rpt-spt”.
}
}
}
}
show pim join instance <routing-instance> [extensive]
show multicast route [extensive] instance <routing-instance> # See if traffic is flowing.
show route table bgp.mvpn.0 # List all MVPN routes.
show route table FisherCo-PE-VRF.mvpn.0 # Query MVPN routes for a specific VPN.
show rsvp session # View P2MP LSP status.
show route forwarding-table destination 224.7.7.7 [extensive] # Verify multicast uses the P2MP LSP.
Interprovider VPN, Option C
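A hedged sketch of the two neighborship types in Option C (addresses and names are placeholders): ASBRs exchange loopbacks via labeled-unicast eBGP, and PEs exchange VPN routes via multihop eBGP between loopbacks:
protocols {
bgp {
group ASBR-to-ASBR { # On each ASBR.
type external;
family inet {
labeled-unicast;
}
neighbor 20.0.0.2;
peer-as 4321;
}
group PE-to-PE { # On each PE.
type external;
multihop;
local-address 10.10.0.1;
family inet-vpn {
unicast;
}
neighbor 30.30.30.1;
peer-as 4321;
}
}
}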
Carrier-of-Carriers VPN - ISP as the Customer
1. Create separate iBGP meshes in each site and in the service provider network.
2. Establish an MPLS mesh in the provider network.
3. Configure MPLS, but not LDP or RSVP signaling, on the CE1-to-PE1 and CE2-to-PE2 links.
interfaces {
ge-0/0/0 { # CE1-to-PE1 link.
unit 0 {
family mpls;
}
}
}
protocols {
mpls {
interface ge-0/0/0.0;
}
}
4. Connect the customer sites through an L3VPN between PE1 and PE2.
5. Enable eBGP neighborships with the “labeled-unicast” address family between CE1 and the VRF in PE1 and between CE2 and the
VRF in PE2.
routing-instances {
FisherCo-ISP {
protocols {
bgp {
group FisherCo-ISP-Peers {
family inet {
labeled-unicast;
}
}
}
}
}
}
6. On CE1, export loopback host routes from Site 1 to PE1. Do the same on CE2 to send loopback host routes from Site 2 to PE2.
7. Use a multi-hop ASBR1-to-ASBR2 eBGP neighborship to distribute external routes.
Carrier-of-Carriers VPN - VPN Service Provider as the Customer
1. Create separate iBGP meshes in each site and in the service provider network.
2. Establish an MPLS mesh within all three provider networks.
3. Configure MPLS, but not LDP or RSVP signaling, on the links between VPN-CE1 and PE1 as well as between VPN-CE2 and PE2.
interfaces {
ge-0/0/0 {
unit 0 {
family mpls;
}
}
}
protocols {
mpls {
interface ge-0/0/0.0;
}
}
4. Connect the VPN provider sites through an L3VPN between PE1 and PE2.
5. Enable eBGP neighborships with the “labeled-unicast” address family between VPN-CE1 and the VRF in PE1 and between VPN-CE2
and the VRF in PE2.
routing-instances {
FisherCo-VPN-Provider {
protocols {
bgp {
group FisherCo-VPN-Provider-Peers {
family inet {
labeled-unicast;
}
}
}
}
}
}
6. On VPN-CE1, export loopback host routes from VPN provider site 1 to the VRF in PE1. Do the same with VPN-CE2 and PE2.
7. Use a multi-hop ASBR1-to-ASBR2 eBGP neighborship to distribute customer site VPN routes.
8. Add the "labeled-unicast" address family to the iBGP meshes in both VPN provider sites. ASBRs need the "resolve-vpn" parameter.
protocols {
bgp {
group IBGP-Peers {
family inet {
labeled-unicast { # All VPN provider routers.
resolve-vpn; # Only on the ASBRs.
}
}
}
}
}
9. Connect the customer sites through an L3VPN between ASBR1 and ASBR2.
CoS
Traffic classification
Code-point aliases map traffic-class names to DSCP bit patterns. The defaults need no modification to configure a classifier. Some defaults:
class-of-service {
code-point-aliases {
dscp {
be 000000;
ef 101110; # DSCP 46
af11 001010;
af21 010010;
nc1 110000;
...and so on.
}
}
}
show class-of-service code-point-aliases inet-precedence
Behavior aggregate (BA) classifiers examine CoS bits in packet headers, assign packets to specific classes, and set each packet’s loss
priority value to high or low. By default, the ipprec-compatibility classifier is used on all interfaces.
class-of-service {
classifiers {
dscp Sample-DSCP-CoS-Classifier {
import default;
forwarding-class Best-Effort {
loss-priority high code-points 000000;
}
forwarding-class Gold {
loss-priority high code-points 010010;
}
forwarding-class Platinum {
loss-priority low code-points 100100;
}
forwarding-class Network-Control {
loss-priority low code-points 110110;
}
}
}
}
show class-of-service interface (IFL)
show class-of-service classifier type inet-precedence
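The classifier above takes effect only once it is applied to an interface; a minimal sketch (the interface is a made-up example):
class-of-service {
interfaces {
ge-0/1/0 {
unit 0 {
classifiers {
dscp Sample-DSCP-CoS-Classifier;
}
}
}
}
}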
Multi-field classifiers: Fields other than the DSCP bits can be used to classify traffic via a firewall filter.
firewall {
filter Set-FC-to-Gold {
term Match-a-Route {
from {
destination-address {
10.10.10.0/24;
}
}
then {
forwarding-class Gold;
accept;
}
}
term Accept-All {
then accept;
}
}
}
Then apply the filter to the appropriate interface, so all incoming traffic on that interface matching the traffic-classifying term of the
firewall filter will be classified:
interfaces {
ge-0/1/0 {
unit 0 {
family inet {
filter {
input Set-FC-to-Gold;
}
}
}
}
}
show interfaces filters
Fixed classifiers
A fixed classifier considers all traffic on an interface to be part of one traffic class. This can be used for tiered data-only plans:
class-of-service {
interfaces {
ge-0/1/1 {
unit 0 {
forwarding-class Gold;
}
}
}
}
Queueing
Drop-profile
Queueing-related parameters include drop profiles, schedulers, and queue servicing. A drop profile is one part of implementing RED
(Random Early Detection). Drop profile variables include queue fullness and drop probability. Queue fullness is the proportion of
currently stored result cells to the total result-cell storage capacity allocated for that queue. Drop probability is the probability,
expressed as a percentage, that a packet is dropped.
A drop profile can be segmented or interpolated. A segmented drop profile’s drop probability is graphed as a step function, also
known as a piecewise constant function:
class-of-service {
drop-profiles {
Segmented-Style-Profile {
fill-level 25 drop-probability 25;
fill-level 50 drop-probability 50;
fill-level 75 drop-probability 75;
fill-level 95 drop-probability 100;
}
}
}
The above configuration is equivalent to the following step function:
drop-probability(fill-level) = 0    if 0 ≤ fill-level < 25
                               25   if 25 ≤ fill-level < 50
                               50   if 50 ≤ fill-level < 75
                               75   if 75 ≤ fill-level < 95
                               100  if 95 ≤ fill-level ≤ 100
An interpolated drop profile’s drop probability is graphed as a monotonically increasing function passing through the x-y coordinates
specified in the fill-level (x) and drop-probability (y) sets, respectively:
class-of-service {
drop-profiles {
Interpolated-Style-Profile {
interpolate {
fill-level [ 50 75 ];
drop-probability [ 25 50 ];
}
}
}
}
Note that the default drop profile’s drop probability is zero until the queue is 100% full, thus dropping excess traffic by preventing it
from being queued at all.
Example low- and high-drop profiles:
class-of-service {
drop-profiles {
High-Drop {
interpolate {
fill-level [ 25 50 ];
drop-probability [ 50 90 ];
}
}
Low-Drop {
interpolate {
fill-level [ 75 95 ];
drop-probability [ 10 40 ];
}
}
}
}
A drop-profile-map is created under a traffic class’s scheduler to map a drop profile to a specific loss priority and protocol:
class-of-service {
schedulers {
Best-Effort-Scheduler {
drop-profile-map loss-priority low protocol any drop-profile Low-Drop;
drop-profile-map loss-priority high protocol any drop-profile High-Drop;
}
}
}
A scheduler defines the properties of a queue, including the queue priority, drop profiles, the amount of outgoing interface
bandwidth assigned to the queue, and the size of the memory buffer allocated for result cells.
Queue bandwidth can be configured as a constant value, a percentage, or the remainder of the bandwidth available after the other
queues’ allocations have been calculated.
class-of-service {
schedulers {
Best-Effort-Scheduler {
transmit-rate remainder;
}
Network-Control-Scheduler {
transmit-rate 1m exact;
}
Gold-Scheduler {
transmit-rate percent 15;
}
Platinum-Scheduler {
transmit-rate percent 25;
}
}
}
The transmit-rate bandwidth limit is not enforced in the absence of congestion unless the “exact” parameter is also specified.
Scheduler - Queue buffer size
Methods of configuring the buffer size include setting the percentage of total memory, specifying the largest tolerable delay, in
microseconds, during which a packet may be queued, or using the remainder of the available memory.
class-of-service {
schedulers {
Best-Effort-Scheduler {
buffer-size remainder;
}
Network-Control-Scheduler {
buffer-size percent 5;
}
Gold-Scheduler {
buffer-size percent 50;
}
Platinum-Scheduler {
buffer-size temporal 200; # 200 microseconds
}
}
}
A queue’s priority can be low or high. High-priority queues are serviced first when there is congestion. Strict-high traffic is always
considered to be in-profile, and is always serviced first, which can starve non-high-priority queues during times of congestion.
class-of-service {
schedulers {
Best-Effort-Scheduler {
priority low;
}
Network-Control-Scheduler {
priority high;
}
Gold-Scheduler {
priority low;
}
Platinum-Scheduler {
priority high;
}
}
}
Here are the finished schedulers, including all of the above parameters:
class-of-service {
schedulers {
Best-Effort-Scheduler {
transmit-rate remainder;
buffer-size remainder;
priority low;
drop-profile-map loss-priority low protocol any drop-profile Low-Drop;
drop-profile-map loss-priority high protocol any drop-profile High-Drop;
}
Network-Control-Scheduler {
transmit-rate 1m exact;
buffer-size percent 5;
priority high;
drop-profile-map loss-priority low protocol any drop-profile Low-Drop;
drop-profile-map loss-priority high protocol any drop-profile High-Drop;
}
Gold-Scheduler {
transmit-rate percent 15;
buffer-size percent 50;
priority low;
drop-profile-map loss-priority low protocol any drop-profile Low-Drop;
drop-profile-map loss-priority high protocol any drop-profile High-Drop;
}
Platinum-Scheduler {
transmit-rate percent 25;
buffer-size temporal 200;
priority high;
drop-profile-map loss-priority low protocol any drop-profile Low-Drop;
drop-profile-map loss-priority high protocol any drop-profile High-Drop;
}
}
}
Scheduler map
Schedulers must then be associated with forwarding classes and, finally, applied to an interface:
class-of-service {
scheduler-maps {
Sample-CoS-Scheduler-Map {
forwarding-class Best-Effort scheduler Best-Effort-Scheduler;
forwarding-class Network-Control scheduler Network-Control-Scheduler;
forwarding-class Gold scheduler Gold-Scheduler;
forwarding-class Platinum scheduler Platinum-Scheduler;
}
}
}
class-of-service {
interfaces {
so-0/1/2 {
scheduler-map Sample-CoS-Scheduler-Map;
}
}
}
Hierarchical scheduling
Interface sets
Traffic-control profiles
class-of-service {
traffic-control-profiles {
Hierarchical-Set-45m {
shaping-rate 45m;
guaranteed-rate 20m;
}
Hierarchical-VLAN-15m {
scheduler-map Sample-CoS-Scheduler; # This only goes on top of the hierarchy.
shaping-rate 15m;
guaranteed-rate 7m;
}
}
}
interfaces {
ge-0/1/3 {
hierarchical-scheduler; # A port may also have a shaping rate.
}
}
Apply appropriate traffic-control profiles to each port, interface set, and VLAN:
class-of-service {
interfaces {
ge-0/1/3 {
excess-bandwidth-share proportional 15000000; # Set to highest queue transmit rate.
interface-set Pipe-45m {
excess-bandwidth-share proportional 15000000;
output-traffic-control-profile Hierarchical-Set-45m;
}
unit 10 { # Configure each VLAN’s profile.
output-traffic-control-profile Hierarchical-VLAN-15m;
}
}
}
}
Policers / rate-limiting
A less common option is the tri-color marking (TCM) policer. Note that “rate” values are measured in bps and “size” values in bytes.
The CIR is the average bit rate allowed, and the PIR is the maximum bit rate allowed. Use either single-rate or two-rate, not both.
firewall {
three-color-policer 1m-Policer-TCM {
single-rate {
committed-information-rate 1m;
committed-burst-size 50k; # If x < CBS, then low PLP.
excess-burst-size 75k; # If CBS < x < EBS, then medium-high PLP.
} # If x > EBS, then high PLP.
two-rate {
committed-information-rate 1m; # If x < CIR, then low PLP.
committed-burst-size 50k; # If CIR < x < PIR, then medium-high PLP.
peak-information-rate 1500k; # PIR >= CIR.
peak-burst-size 25k; # If x > PIR, then high PLP.
}
}
}
class-of-service {
tri-color; # On by default on M120 and MX.
}
Unlike in Junos 6.x, policers can now be applied directly to an interface.
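For example, a plain single-rate policer might be defined and applied directly to an interface like this (the name, limits, and interface are made-up values):
firewall {
policer 1m-Policer {
if-exceeding {
bandwidth-limit 1m;
burst-size-limit 50k;
}
then discard;
}
}
interfaces {
ge-0/1/0 {
unit 0 {
family inet {
policer {
input 1m-Policer;
}
}
}
}
}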
Rewrite rules
Rewrite rules set the value of the CoS bits in a packet’s header, based on traffic class and loss priority, right before it is transmitted
from the outgoing FPC. This standardizes and accurately communicates the packet’s CoS information to the rest of the network.
class-of-service {
interfaces {
ae0 {
unit 0 {
rewrite-rules {
dscp Sample-DSCP-CoS-Rewrite-Rule;
}
}
}
}
}
show class-of-service interface ae0
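The rewrite rule itself is defined under class-of-service; a sketch with hypothetical forwarding classes and code points:
class-of-service {
rewrite-rules {
dscp Sample-DSCP-CoS-Rewrite-Rule {
forwarding-class Gold {
loss-priority low code-point 010010; # Assumed code points.
loss-priority high code-point 010100;
}
forwarding-class Platinum {
loss-priority low code-point 100100;
}
}
}
}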
Security
VTY authentication
system {
authentication-order [ radius tacplus password ];
radius-server {
192.168.200.1 secret "blahblahblah"; ## SECRET-DATA
192.168.200.2 secret "whateverwhatever"; ## SECRET-DATA
}
tacplus-server {
192.168.100.1 secret "blahblahblah"; ## SECRET-DATA
192.168.100.2 secret "whateverwhatever"; ## SECRET-DATA
}
}
Preventing spoofing
Unicast Reverse Path Forwarding (uRPF) only forwards traffic received on the same interface the router uses to reach the source of
the traffic. This functionality was derived from the multicast RPF algorithm. A CPE-facing interface with rpf-check configured:
interfaces {
at-0/1/0 {
unit 0 {
vci 0.100;
family inet {
rpf-check;
address 192.168.10.1/24;
}
}
}
}
show interfaces detail at-0/1/0.0
RPF Failures: Packets: 123, Bytes: 4567
The uRPF algorithm only compares the IP SA to active routes, so this does not work well on core-facing interfaces. All routes can be
considered by configuring the feasible-paths parameter:
routing-options {
forwarding-table {
unicast-reverse-path feasible-paths;
}
}
show route extensive 192.168.10/24
unicast reverse-path: 123
[ge-0/0/1.0 so-0/1/0.0]
JUNOS performs the uRPF check before any configured input firewall filters.
IPv6
Interfaces
interfaces {
lo0 {
unit 0 {
family inet6 {
address 2001::1:1/128;
}
}
}
}
show ipv6 neighbors # IPv6 ND.
Static routes
routing-options {
rib inet6.0 {
static {
route 0::/0 next-hop 2001::100:1;
}
}
}
show route table inet6.0 protocol static
Tunneling
To tunnel IPv6 over IPv4, simply create a GRE tunnel with IPv4 source and destination addresses, put an IPv6 address on the tunnel
interface, and route traffic to the tunnel.
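A minimal sketch of such a tunnel, assuming a router with tunnel services available and made-up addresses:
interfaces {
gr-0/1/0 {
unit 0 {
tunnel {
source 192.168.1.1; # Local IPv4 endpoint (assumed).
destination 192.168.2.1; # Remote IPv4 endpoint (assumed).
}
family inet6 {
address 2001::55:1/126;
}
}
}
}
routing-options {
rib inet6.0 {
static {
route 2001::200:0/120 next-hop gr-0/1/0.0; # Route IPv6 traffic into the tunnel.
}
}
}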
RIPng
protocols {
ripng {
group Internal {
export Send-My-RIPng-Routes; # Junos advertises no RIPng routes by default.
neighbor ge-0/0/2.0;
neighbor ge-0/0/3.0;
}
}
}
show route protocol ripng
policy-options {
policy-statement Send-My-RIPng-Routes {
term Direct-Routes {
from {
protocol direct; # Directly connected IPv6 prefixes.
family inet6;
}
then accept;
}
term RIPng-Routes {
from protocol ripng;
then accept;
}
}
}
OSPFv3
protocols {
ospf3 {
area 0.0.0.0 {
interface lo0.0;
interface ge-0/0/2.0;
}
}
}
show ospf3 (neighbor|route|database|interface <IFL>)
show route protocol ospf3
IS-IS
protocols {
isis {
interface ge-0/0/2.0; # An IPv6 IFL with family iso.
interface lo0.0;
}
}
show isis database extensive MyHost.00-00 # Show advertised IPv6 TLVs.
show isis (adjacency|...) [detail]
show route protocol isis
BGP
protocols {
bgp {
group External-Peers {
type external;
peer-as 65001;
neighbor 2001::100:1;
}
group IBGP-Peers {
type internal;
local-address 2001::1:1;
neighbor 2001::1:5;
}
}
}
show bgp summary
show route table inet6.0 [protocol (bgp|...)]
show route advertising-protocol bgp 2002::1
6PE
PE Routers
In 6PE, a “PE” router is defined as any provider router that receives unlabeled IPv6 packets and needs to tunnel them over MPLS.
The ipv6-tunneling setting is the key to enabling forwarding of such packets.
interfaces {
ge-1/0/0 { # CE-facing link.
unit 0 {
family inet6 {
address ::1.2.3.1/126;
}
}
}
xe-0/0/0 { # Core-facing link.
unit 0 {
family inet {
address 5.6.7.1/30;
}
family inet6; # No address. Enables routing of received IPv6 packets.
family mpls;
}
}
}
protocols {
mpls {
ipv6-tunneling; # Copies inet.3 routes to IPv4-compatible routes in inet6.3.
interface xe-0/0/0.0;
}
ldp {
interface xe-0/0/0.0;
}
bgp {
group CE1 {
type external;
family inet6 {
unicast;
}
export Send-V6;
peer-as 1234;
neighbor ::1.2.3.2;
local-address ::1.2.3.1;
}
group IBGP-Peers {
type internal;
family inet6 {
labeled-unicast {
explicit-null;
}
}
family inet {
unicast;
}
export [ Next-Hop-Self Send-V6 ];
neighbor 5.6.7.8; # And so on...
}
}
}
CE Routers
interfaces {
ge-1/0/0 { # PE-facing link.
unit 0 {
family inet6 {
address ::1.2.3.2/126;
}
}
}
}
protocols {
bgp {
group PE1 {
type external;
family inet6 {
unicast;
}
export Send-BGP6;
peer-as 5678;
neighbor ::1.2.3.1;
local-address ::1.2.3.2;
}
}
}
P (Core) Routers
Core routers do not need any IPv6 configuration unless they are route reflectors. RRs for 6PE need the inet6 labeled-unicast
address family. They also need the ability to resolve IPv4-compatible IPv6 next hops in inet6.3 in order to reflect 6PE routes, but a
simple static default route in inet6.3 can accomplish this:
routing-options {
rib inet6.3 {
static {
route ::/0 discard;
}
}
}
This static route only allows 6PE routes to be reflected, but forwarding of unlabeled IPv6 traffic requires a PE router configuration.
Multicast
RPF
Successful RPF checks have their results stored in inet.1. By default, the RPF algorithm queries the topology of inet.0.
show multicast rpf [<source-address>] [summary]
RIB groups
routing-options {
rib-groups {
Import-to-inet0-and-inet2 {
import-rib [ inet.0 inet.2 ]; # Import to both RIBs.
import-policy Import-a-Few-Routes; # Filter some routes from being imported into the RIBs.
}
Import-to-inet2-Only {
import-rib inet.2;
}
}
}
protocols {
ospf {
rib-group Import-to-inet0-and-inet2; # Import OSPF routes into this RIB group.
}
pim {
rib-group Import-to-inet2-Only; # This protocol references only inet.2 for RPF checks.
}
msdp {
rib-group Import-to-inet2-Only; # Ditto.
}
}
MX
bridge-domains FisherCo-Domain {
protocols {
igmp-snooping {
proxy;
interface ge-0/0/0.0 {
host-only-interface;
static {
group 227.0.10.1; # Static join.
}
}
interface ge-0/1/0.0 {
multicast-router-interface;
}
}
}
}
EX
protocols {
igmp-snooping {
vlan FisherCo-VLAN { # ...assuming that this VLAN has these interfaces in it.
proxy;
interface ge-0/0/0.0 {
host-only-interface;
static {
group 227.0.10.1; # Static join.
}
}
interface ge-0/1/0.0 {
multicast-router-interface;
}
}
}
}
IGMP
PIM-DM
PIM-SM
DR-to-RP register messages are encapsulated in unicast packets, so tunnel services are needed. Enable tunneling on MX:
chassis {
fpc 0 {
pic 0 {
tunnel-services {
bandwidth 1g; # For a 40-port GE DPC. Use “10g” on a 4-port XE DPC.
}
}
}
}
Minimum setup: Enable PIM-SM on the appropriate interfaces and set up static RP, auto-RP, or BSR (see below). Add the source and
receiver interfaces to the IGP as passive interfaces for traffic to flow.
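The passive-interface portion might look like this (the IGP, area, and interfaces are assumptions):
protocols {
ospf {
area 0.0.0.0 {
interface ge-0/0/5.0 { # Source-facing interface (assumed).
passive;
}
interface ge-0/0/6.0 { # Receiver-facing interface (assumed).
passive;
}
}
}
}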
protocols {
pim {
import In-Good-Source-Groups; # Control incoming joins and prunes.
export Out-Good-Source-Groups;
rp {
dr-register-policy DR-Reg-Pol; # Control registers sent to the RP. On the RP, use rp-register-policy.
}
interface all {
mode sparse; # “sparse-dense” in auto-RP domains.
}
interface ge-0/0/1.0 {
neighbor-policy Allow-PIM-Neighbors;
}
interface fxp0.0 {
disable;
}
join-load-balance; # Multiple equal-cost paths can pass RPF checks. Secure?
join-prune-timeout 240;
override-interval 500; # This downstream router waits 500ms before sending an override join.
propagation-delay 1000; # This upstream router waits 1s to get an override join before pruning.
reset-tracking-bit; # Suppress joins on broadcast segments where a join has been received.
spt-threshold {
infinity [ RPT-Only-225 RPT-Only-226 ]; # SPT timer = infinity for accepted (S,G) pairs.
}
}
}
policy-options {
policy-statement Allow-PIM-Neighbors {
term 1 {
from {
route-filter 10.0.20.0/24 orlonger; # List acceptable neighbors.
}
then accept;
}
then reject; # Prevent other neighborships.
}
policy-statement In-Good-Source-Groups {
term 1-Allowed-Groups { # For (*,G) pairs.
from {
route-filter 227.7.0.0/16 orlonger;
}
then accept;
}
term 2-Allowed-Source-Group-Pairs { # For (S,G) pairs.
from {
route-filter 232.7.0.0/16 orlonger;
source-address-filter 10.0.20.2/32 exact;
}
then accept;
}
then reject; # No other (*,G) or (S,G) pairs are allowed.
}
policy-statement RPT-Only-225 {
term 1 {
from {
route-filter 225.1.2.3/32 exact;
source-address-filter 30.0.1.25/32 exact;
}
then accept; # Only affect (S,G) pairs matching the “from” statement.
}
term 2 {
then reject; # Other (S,G) pairs are unaffected.
}
}
}
clear pim join # Remember to do this after configuring an spt-threshold policy.
show pim rps [extensive]
show pim bootstrap
Static RP
Non-RP routers:
protocols {
pim {
rp {
static { # Replace “static” with “local” on the RP.
address 30.0.10.1;
}
}
}
}
Auto-RP
BSR
RP and BSR:
protocols {
pim {
rp {
bootstrap-import BSR-Import-Pol; # Control interfaces where BSR messages can be received.
bootstrap-export BSR-Export-Pol;
bootstrap-priority 150;
local {
address 30.0.10.1;
}
}
}
}
RP-to-group mappings flood between PIM neighbors using the bootstrap message, so RP configuration isn’t needed elsewhere.
MSDP
protocols {
msdp {
import MSDP-Import-Pol; # Control SA importation. May be set here or on a group or peer.
export MSDP-Export-Pol;
local-address 10.0.0.1;
group AS-1234 {
mode mesh-group;
peer 10.0.0.2;
peer 10.0.0.3 {
default-peer; # Accept all SAs.
}
}
active-source-limit { # Can also be set under a peer or “source <prefix>”.
maximum 1000;
threshold 750;
}
traceoptions {
file MSDP-Logs;
flag general detail;
}
}
}
policy-options {
policy-statement MSDP-Import-Pol {
term 10 {
from {
neighbor 10.0.0.2;
interface ge-0/0/5.0;
route-filter 224.7.6.5/32 exact;
source-address-filter 10.0.20.1/32 exact;
}
then reject;
}
then accept; # MSDP import policies require explicit acceptance.
}
}
show msdp [peer <peer-IP>] [detail]
show msdp (source-active|statistics)
show route table inet.4 # SA cache.
clear msdp cache
Anycast-RP
Anycast requires the lo0 secondary address plus a discovery mechanism (see MSDP and Anycast-PIM options below):
interfaces {
lo0 {
unit 0 {
family inet {
address 10.0.0.1 primary; # Unique. Set as router-id, too.
address 10.0.100.1; # Secondary. Use this anycast address on each RP.
}
}
}
}
MSDP option
protocols {
pim {
rp {
local {
address 10.0.100.1; # Anycast.
}
}
}
msdp {
group Anycast {
mode mesh-group;
local-address 10.0.0.1; # Local primary.
peer 10.0.0.2; # Peer’s primary.
}
}
}
Anycast-PIM option
protocols {
pim {
rp {
local {
family inet {
address 10.0.100.1; # Anycast.
anycast-pim {
rp-set {
address 10.0.0.2; # Peer’s primary.
}
local-address 10.0.0.1; # Local primary.
}
}
}
}
}
}
SSM
Addressing options:
routing-options {
multicast {
ssm-groups {
227.7.0.0/16; # Adds this as an SSM-only range.
}
asm-override-ssm; # Allows mixed ASM-SSM operations in SSM ranges.
}
}
PIM: Set up PIM-SM on each router. If only SSM is needed, then enable sparse mode on all applicable up- and downstream
interfaces. To enable ASM and SSM, enable the PIM interfaces and configure an RP discovery mechanism.
IGMP:
protocols {
igmp {
interface ge-7/0/0.0 {
version 3;
}
interface ge-9/0/0.0 {
ssm-map Map-227.7.7.1 { # This client doesn’t support IGMPv3, so map sources manually.
policy SSM-Match-227.7.7.1; # Map IGMPv1 or v2 (*,G) joins for this group...
source 10.0.75.1; # ...to this source.
}
}
}
}
policy-options {
policy-statement SSM-Match-227.7.7.1 {
term 1 {
from {
route-filter 227.7.7.1/32 exact;
}
then accept;
}
}
}
Scoping
Scoping policy provides granular boundary control:
routing-options {
multicast {
scope-policy FisherCo-MCast-Boundary; # Create a boundary for imported or exported multicast data.
}
}
policy-options {
policy-statement FisherCo-MCast-Boundary {
term 10 {
from {
interface ge-7/0/0.0; # Put boundary interfaces here.
route-filter 239.0.0.0/10 orlonger; # Multicast traffic to filter upon import or export.
}
then reject;
}
}
}
show multicast scope
Service Provider Switching
Access ports
Trunks
interfaces {
ge-0/1/1 {
native-vlan-id 200;
vlan-tagging;
unit 0 {
family bridge {
interface-mode trunk;
vlan-id-list [ 100-101 ];
}
}
}
}
The “native-vlan-id” command means received untagged frames go in vlan 200, and transmitted vlan 200 frames aren’t tagged.
bridge-domains {
VLAN_100 {
vlan-id 100;
}
VLAN_101 {
vlan-id 101;
}
}
Bridge-domain lists
bridge-domains {
FisherCo {
vlan-id-list [ 100-110 ]; # Creates domains named “FisherCo-vlan-0100”, etc.
}
}
show bridge domain [<name>] [detail]
show bridge statistics
IRB interfaces
An IRB interface enables the router to route packets received on a switchport when the router’s MAC is the destination address.
interfaces {
ge-0/0/3 {
unit 0 {
family bridge {
interface-mode access;
vlan-id 100;
}
}
}
irb {
unit 1 {
family inet {
address 192.168.100.1/24; # GW IP for devices on vlan 100.
}
}
}
}
bridge-domains {
VLAN_100 {
vlan-id 100;
routing-interface irb.1;
}
}
MAC-learning throttles
Global:
protocols {
l2-learning {
global-mac-limit 100000; # 393k is default.
global-mac-statistics; # Off by default.
global-mac-table-aging-time 600; # 300s is default. Only globally configurable.
global-no-mac-learning; # Don’t learn MACs dynamically.
}
}
Per switch:
switch-options {
interface-mac-limit 2048; # 1k is default.
mac-statistics;
mac-table-size 10000; # 5k is default.
no-mac-learning;
}
Per bridge domain:
bridge-domains {
FisherCo-VLAN {
bridge-options {
interface-mac-limit 2048;
mac-statistics;
mac-table-size {
10000; # 5k is default. When MAC table is full, flood frames.
packet-action drop; # When MAC table is full, drop frames to unknown MACs.
}
no-mac-learning;
}
}
}
Per interface:
bridge-domains {
FisherCo-VLAN {
bridge-options {
interface ge-0/0/1.100 {
interface-mac-limit 2048;
no-mac-learning;
static-mac ab:cd:ef:12:34:56;
}
}
}
}
show l2-learning [global-information] [global-mac-count] [interface <int>]
firewall {
family bridge { # MX-only, for now.
filter FisherCo-Secure {
term Deny-Bad-Guys {
from {
source-mac-address 12:34:56:ab:cd:ef; # Many possible match conditions exist.
}
then {
count;
discard;
}
}
term Allow-Others {
then accept;
}
}
}
}
An implicit discard-all term exists.
Apply these layer-two filters to an interface just as a normal layer-three input or output filter. Use “input-list” or “output-list” to
apply multiple filters. A bridge domain supports a single layer-two input filter and no output filters:
bridge-domains {
FisherCo-VLAN {
forwarding-options {
filter {
input FisherCo-Secure;
}
}
}
}
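The interface-level application follows the familiar pattern (the IFL is a made-up example):
interfaces {
ge-0/0/1 {
unit 100 {
family bridge {
filter {
input FisherCo-Secure;
}
}
}
}
}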
If using “vlan-id-list” to create a domain, then a bridge-domain filter cannot be configured. If bridge-domain and interface input
filters are configured, then the bridge-domain filter’s code is appended to the end of the interface filter, creating one logical filter.
LFM
CFM
Customer bridge:
protocols {
oam {
ethernet {
connectivity-fault-management {
action-profile EVC-CFM-OAM-Down {
event {
adjacency-loss; # If the CFM OAM neighborship goes down...
}
action {
interface-down; # ...then shut down the interface.
}
}
maintenance-domain FisherCo-Domain { # Must match peer.
level 5; # Required. Must match peer.
maintenance-association FisherCo-EVC-100 { # Required. Must match peer.
continuity-check { # Interval must match peer.
interval 100ms; # Required? CC keepalives. Default is 1m.
}
mep 101 {
interface ge-0/0/1.100 vlan 100;
direction down; # Required.
auto-discovery; # This or remote-mep is required.
remote-mep 105 { # The peer customer bridge’s MEP.
action-profile EVC-CFM-OAM-Down; # Can’t apply if auto-discovery is enabled.
}
}
}
}
}
}
}
}
Provider bridge:
protocols {
oam {
ethernet {
connectivity-fault-management {
maintenance-domain ISP-Domain {
level 4;
maintenance-association ISP-EVC-100 {
continuity-check {
interval 100ms;
}
mip-half-function default; # Acts as a MIP for level 5 only.
mep 102 {
interface ge-0/1/2.100 vlan 100;
direction up;
auto-discovery;
}
}
}
}
}
}
}
show oam ethernet connectivity-fault-management (delay-statistics|forwarding-state|mep-database|mep-statistics)
show oam ethernet connectivity-fault-management (mip|path-database|policer)
show oam ethernet connectivity-fault-management interface <int> vlan <vlan-id> [extensive]
ping ethernet maintenance-domain <domain> maintenance-association <assoc> (<mac>|mep <mep-id>)
traceroute ethernet maintenance-domain <domain> maintenance-association <assoc> mep <mep-id>
monitor ethernet delay-measurement maintenance-domain <dom> maintenance-association <assoc> mep <mep-id> two-way
show oam ethernet connectivity-fault-management mep-statistics maintenance-domain <dom> maintenance-association…
S-VLANs
interfaces {
ge-0/0/2 {
flexible-vlan-tagging;
unit 200 {
vlan-id 200; # Outer tag.
family bridge {
interface-mode trunk;
inner-vlan-id-list 100-110; # Inner tags.
}
}
}
}
Old style:
interfaces {
ge-0/0/2 {
flexible-vlan-tagging;
encapsulation flexible-ethernet-services;
unit 0 {
encapsulation vlan-bridge;
vlan-tags outer 200 inner-range 100-110;
}
}
}
On each port:
interfaces {
ge-0/0/1 {
flexible-vlan-tagging;
encapsulation flexible-ethernet-services;
unit 0 {
family bridge {
interface-mode trunk;
vlan-id-list 200; # Outer tags to allow.
}
}
}
}
bridge-domains {
FisherCo-VLAN {
vlan-id 200; # Configure identically throughout the PBN.
}
}
The bridge-domain should only contain the S-TAG VLAN. Configure the PNP just as an S-VLAN bridge interface.
CEP:
interfaces {
ge-0/0/1 {
unit 0 {
family bridge {
interface-mode access; # Receive all frames...
vlan-id 200; # ...and push the outer tag onto them.
}
}
}
}
Old style: This required placing each customer in a different virtual switch due to the “vlan-id all” command.
bridge-domains {
FisherCo-domain {
vlan-id all;
interface ge-0/0/0.0;
interface ge-1/0/0.0;
}
}
CEP:
interfaces {
ge-0/0/0 {
flexible-vlan-tagging;
encapsulation flexible-ethernet-services;
unit 0 {
encapsulation vlan-bridge;
vlan-id-range 1-4094;
}
}
}
PNP:
interfaces {
ge-1/0/0 {
flexible-vlan-tagging;
encapsulation flexible-ethernet-services;
unit 0 {
encapsulation vlan-bridge;
vlan-tags outer 200 inner-range 1-4094;
}
}
}
Using vlan-id-list
Each virtual switch can have only one “vlan-id-list,” so that domains do not overlap.
bridge-domains {
FisherCo-Domain {
vlan-id-list 100-110; # Creates a domain per C-VLAN.
}
}
CEP:
interfaces {
ge-0/0/0 {
unit 0 {
family bridge {
interface-mode trunk;
vlan-id-list 100-110;
}
}
}
}
PNP:
interfaces {
ge-1/0/0 {
flexible-vlan-tagging;
unit 0 {
vlan-id 200;
family bridge {
interface-mode trunk;
inner-vlan-id-list 100-110;
}
}
}
}
The “vlan-id all” command necessitates a separate virtual switch per customer.
bridge-domains {
FisherCo-Domain {
vlan-id all;
interface ge-0/0/0.0;
interface ge-1/0/0.0;
}
}
CEP:
interfaces {
ge-0/0/0 {
flexible-vlan-tagging;
encapsulation flexible-ethernet-services;
unit 0 {
encapsulation vlan-bridge;
vlan-id-range 100-110;
}
}
}
PNP:
interfaces {
ge-1/0/0 {
flexible-vlan-tagging;
encapsulation flexible-ethernet-services;
unit 0 {
encapsulation vlan-bridge;
vlan-tags outer 200 inner-range 100-110;
}
}
}
This method uses SVL. The “vlan-id none” command triggers PNP normalization. (?)
bridge-domains {
FisherCo-Domain {
vlan-id none; # Pop C-VLAN before MAC lookup.
interface ge-0/0/0.100; # Do for each CEP IFL.
interface ge-1/0/0.0;
}
}
CEP:
interfaces {
ge-0/0/0 {
flexible-vlan-tagging;
encapsulation flexible-ethernet-services;
unit 100 { # Create an IFL per C-VLAN.
encapsulation vlan-bridge;
vlan-id 100;
}
...and so on.
}
}
PNP:
interfaces {
ge-1/0/0 {
flexible-vlan-tagging;
encapsulation flexible-ethernet-services;
unit 0 {
encapsulation vlan-bridge;
vlan-tags outer 200 inner 100; # Normalizes untagged frames.
}
}
}
VLAN maps
CEP:
interfaces {
ge-0/0/0 {
unit 100 {
input-vlan-map {
push;
vlan-id 200; # Push S-TAG 200 on received frames.
}
output-vlan-map pop; # Pop S-TAGs upon transmission.
}
}
}
show interfaces ge-0/0/0.100
PB NNI
The “translate” command is bidirectional, swapping S-TAGs on all incoming and outgoing frames.
interfaces {
ge-2/0/0 {
unit 0 {
family bridge {
vlan-rewrite {
translate 300 200; # translate <peer S-TAG> <local S-TAG>
}
}
}
}
}
show interfaces ge-2/0/0.0
A domain should exist for the local S-TAG’s VLAN.
E-Line EVC
Configure CBP, PIP, CEP, and PNP interfaces, as well as B- and I-Component virtual switches, on each BEB.
BEB PNP:
interfaces {
ge-1/0/0 {
unit 0 {
family bridge {
interface-mode trunk;
vlan-id-list 2000; # PBBN B-TAG
}
}
}
}
Configure interfaces and routing instances on each BCB. Only B-component virtual switches are needed on BCBs, and their
configuration is identical to that of a BEB, with all PNP interfaces contained in them.
PNP:
interfaces {
ge-2/0/0 {
unit 0 {
family bridge {
interface-mode trunk;
vlan-id-list 2000;
}
}
}
}
ERP
Ring owner:
protocols {
protection-group {
ethernet-ring Bobs-ISP-Ring {
ring-protection-link-owner;
east-interface {
control-channel {
ge-2/0/0.0;
vlan 200; # Add dedicated VLAN to interfaces’ vlan-id-lists on all nodes.
}
ring-protection-link-end; # East interface is the RPL.
}
west-interface {
control-channel {
ge-3/0/0.0;
vlan 200;
}
}
}
}
}
show protection-group ethernet-ring [aps|interface|node-state|statistics] [detail]
clear protection-group ethernet-ring statistics group-name <group>
For a normal node, the configuration is identical, but remove “ring-protection-link-owner” and “ring-protection-link-end”.
STP / RSTP
MSTP
protocols {
mstp {
configuration-name FisherCo-Region; # Must match throughout region.
revision-level 1; # Ditto.
interface ge-0/0/0;
msti 1 { # 1..64. All VLANs are in MSTI 0 by default.
bridge-priority 8k;
vlan 100-110; # Maps VLANs to the instance.
}
}
}
show spanning-tree mstp configuration
VSTP
protocols {
vstp {
interface ge-0/0/0; # Must also be assigned to a VLAN.
vlan 100 {
bridge-priority 8k;
interface ge-0/0/0;
}
}
}
Virtual switches
routing-instances {
FisherCo-VS {
instance-type virtual-switch;
interface ge-0/0/0.0;
bridge-domains {
VLAN_100 {
vlan-id 100;
routing-interface irb.1;
}
}
}
}
show bridge domain
Automation
Op scripts
Every script requires the .slax file extension. Install op scripts in /var/db/scripts/op.
system {
scripts {
op {
file {
Clock.slax; # Enables the script.
}
}
}
}
Execute from operational mode: op [script] # Omit “.slax”. Example: op Clock
View op script parameters: op [script] ?
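A minimal Clock.slax might look like this (an untested sketch; the boilerplate namespaces are standard, but the RPC and output node names are from memory):
version 1.0;
ns junos = "http://xml.juniper.net/junos/*/junos";
ns xnm = "http://xml.juniper.net/xnm/1.1/xnm";
ns jcs = "http://xml.juniper.net/junos/commit-scripts/1.0";
import "../import/junos.xsl";
match / {
<op-script-results> {
var $uptime = jcs:invoke("get-system-uptime-information"); /* RPC for "show system uptime" */
<output> $uptime/current-time/date-time;
}
}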
Commit scripts
Event scripts
Generated event:
event-options {
generated-event {
Every-Three-Hours time-interval 10800; # Trigger every 10800 seconds.
}
policy Archive-Log-Files {
events Every-Three-Hours; # When “Every-Three-Hours” is triggered...
then {
event-script Archive-Logs.slax; # ...execute this script.
}
}
event-scripts {
file {
Archive-Logs.slax;
}
}
}
Archival event with no SLAX script: (untested)
event-options {
policy Button-Press-Event {
events chassisd_fru_offline_notice;
attributes-match {
chassisd_fru_offline_notice.reason matches "Offlined by button press";
}
then {
execute-commands {
commands {
"show chassis craft-interface";
"request chassis fpc online slot {$$.slot}"; # Retrieves .slot attribute from the log message.
"set chassis display message Stop_pushing_buttons!";
}
output-filename Button-Press-Log; # Filename: Button-Press-Log-YYYYMMDD-HHMMSS-index
destination VarTmp; # All command output goes to VarTmp defined below.
output-format text;
}
}
}
destinations {
VarTmp {
archive-sites {
/var/tmp;
}
}
}
}
Sources
Advanced Junos Service Provider Student Guide, Revision 10.a. Sunnyvale, CA: Juniper Networks, 2011.
Call, Curtis. This Week: Applying Junos Automation. Sunnyvale, CA: Juniper Networks, 2011.
Soricelli, Joseph M., et al. JNCIA - Juniper Networks Certified Internet Associate Study Guide. Sunnyvale, CA: Juniper Networks, 2006.
Juniper Networks Advanced Policy Student Guide, Revision 6.b. Sunnyvale, CA: Juniper Networks, 2004.
Junos Class of Service Student Guide, Revision 10.a. Sunnyvale, CA: Juniper Networks, 2010.
"JUNOS Documentation." Technical Documentation. Juniper Networks, n.d. Web. 16 October, 2012.
<www.juniper.net/techpubs/software/junos/index.html>.
Junos Intermediate Routing Student Guide, Revision 10.a. Sunnyvale, CA: Juniper Networks, 2010.
Junos MPLS and VPNs Student Guide, Revision 10.a. Sunnyvale, CA: Juniper Networks, 2011.
Junos Multicast Routing Student Guide, Revision 10.a. Sunnyvale, CA: Juniper Networks, 2011.
Junos Operating System Fundamentals Student Guide, Revision 10.b. Sunnyvale, CA: Juniper Networks, 2010.
Junos Routing Essentials Student Guide, Revision 10.a. Sunnyvale, CA: Juniper Networks, 2010.
Junos Service Provider Switching Student Guide, Revision 10.a. Sunnyvale, CA: Juniper Networks, 2010.
Soricelli, Joseph M. JNCIS - Juniper Networks Certified Internet Specialist Study Guide. Sunnyvale, CA: Juniper Networks, 2006.
"Technology Overview - Using 4-Byte Autonomous System Numbers in BGP Networks." Juniper.net. Juniper Networks, 13 June 2012. Web. 20
Sept. 2012. <www.juniper.net/techpubs/en_US/junos/information-products/topic-collections/nce/4-byte-as-numbers/4_byte_as_numbers.pdf>.