litchralee

joined 1 year ago
[–] litchralee@sh.itjust.works 2 points 1 month ago

My recommendation is to start by getting fax to work locally. As in, from port 1 of a single SPA2102 to port 2 of the same unit. This would validate that your fax machines and the SPA2102 are operational, and it's entertaining in its own right to have a dialtone that "calls" the other port.

Fortunately, Gravis from the Cathode Ray Dude YouTube channel has a writeup on doing exactly that, and I've personally followed these steps on an SPA122 with success, although I was doing a silly phone project, not a fax project. https://gekk.info/articles/ata-config.html

If you're lucky, perhaps fax will Just Work because your machines are very permissive with the signals they receive and can negotiate. If not, you might have to adjust the "fax optimizations" discussed here: https://gekk.info/articles/ata-dialup.html

And once local faxing works, you can then try connecting two VoIP devices together over the network. This can be as simple as direct SIP using IP and port number, or can involve setting up a PBX that both devices register against.

[–] litchralee@sh.itjust.works 1 points 2 months ago (1 children)

Good luck with your endeavors! When debugging a complex problem, always try to isolate individual components and test each one on its own. This can be as easy as swapping the web application for Python's built-in SimpleHTTPServer to validate firewall and reverse-proxy configuration.
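As a sketch of that isolation trick (port 8000 here is arbitrary): if this tiny stand-in responds when reached through your firewall and reverse proxy, the network path is fine and the fault lies in the real application.

```python
# Minimal stand-in service for isolation testing. It serves the current
# directory over HTTP; if a request makes it through your firewall and
# reverse proxy to this, the network path works. Port 8000 is arbitrary.
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
server.serve_forever()
```

The same thing is available as a one-liner: `python3 -m http.server 8000`.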

[–] litchralee@sh.itjust.works 1 points 2 months ago (3 children)

Thank you for that detailed description. I see two things of concern: the first is the IPv6 "network is unreachable" error. The second is that the IPv4 connection timed out, as opposed to being rejected.

So starting in order: the machine on the external network that you're running curl on, does it have a working IPv6 stack? As in, if you opened a web browser to https://test-ipv6.com/ , would it pass all or most tests? An immediate "network is unreachable" suggests that the external machine doesn't have IPv6 connectivity, which doesn't help debug what's going on with the services.

Also, you said that all services that aren't on port 80 or 443 are working when viewed externally, but do you know if that was over IPv4 or IPv6? I use a browser extension called IPvFoo to display which protocol the page has loaded with, available for Chrome and Firefox. I would check that your services work equally well over IPv6 as over IPv4.

Now for the second issue. Since you said all services except those on port 80, 443 are reachable externally, that would mean the IP address -- v4 or v6, whichever one worked -- is reachable but specifically ports 80 and 443 did not.

On a local network, the norm (for properly administered networks) is for OS firewalls to REJECT unwanted traffic -- I'm using all-caps simply because that's what I learned from Linux iptables. A REJECT means that the packet is discarded by the firewall, and then an ICMP notification is sent back to the original sender, indicating that the firewall didn't want it and the sender can stop waiting for a reply.

For WANs, though, the norm is for an external-facing firewall to DROP unwanted traffic. The distinction is that DROPping is silent, whereas REJECT sends the notification. For port forwarding to work, both the firewall on your router and the firewall on your server must permit ports 80 and 443 through. It is a very rare network that blocks outbound ICMP messages from a LAN device to the Internet.
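The difference is observable from the client's side with a plain TCP connect; here's a sketch of what each case looks like (the helper name is mine, not from any tool):

```python
# How REJECT vs DROP look from the client's point of view, using a raw
# TCP connect. REJECT answers immediately; DROP is silence until timeout.
import socket

def probe(host, port, timeout=3):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except ConnectionRefusedError:
        return "rejected"        # REJECT: an RST/ICMP notification came back
    except socket.timeout:
        return "dropped?"        # DROP: nothing came back before our timeout
    except OSError as e:
        return "error: %s" % e   # e.g. "network is unreachable"
    finally:
        s.close()
```

A closed port on a well-behaved LAN host returns "rejected" in milliseconds, whereas a WAN firewall that silently discards the packet leaves you waiting the full timeout, exactly the behavior curl reported.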

With all that said, I'm led to believe that your router's firewall is not honoring your port-forward setting. Because if it did and your server's firewall discarded the packet, it probably would have been a REJECT, not a silent drop. But curl showed your connection timed out, which usually means no notification was received.

This is merely circumstantial, since there are some OSes that will DROP even on the LAN, based on misguided and improper threat modeling. But you will want to focus on the router's firewall, as one thing routers often do is intercept ports 80 and 443 for their own web UI. Thus, you have to make sure there aren't hidden rules like that preempting the port-forwarding table.

[–] litchralee@sh.itjust.works 1 points 2 months ago (5 children)

I'm still trying to understand exactly what you do have working. You have other services exposed by port numbers, and they're accessible in the form <name>.duckdns.org:<port> with no problems there. And then you have Jellyfin, which you're able to access at home using https://jellyfin.<name>.duckdns.org without problems.

But the moment you try accessing that same URL from an external network, it doesn't work. Even if you use HTTP with no S, it still doesn't connect. Do I understand that correctly?

[–] litchralee@sh.itjust.works 5 points 2 months ago (7 children)

> it works fine (but no https)

This would suggest port 443 is not being exposed externally. You might try using a CLI tool like curl, which is fairly verbose about how it connects to a given URL as it tries to download it. If given an HTTPS URL that doesn't work, the output should help point at the issue.
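For example (the hostname below is a stand-in; substitute your actual DuckDNS name):

```shell
# -v prints each stage: DNS lookup, TCP connect, TLS handshake, HTTP exchange.
# --connect-timeout keeps a silently dropped port from hanging for minutes.
curl -v --connect-timeout 10 https://jellyfin.example.duckdns.org/
```

A "Connection refused" fails fast, while a silent drop ends in "Connection timed out" after the full wait; that distinction is itself a clue about which firewall is eating the traffic.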

[–] litchralee@sh.itjust.works 1 points 2 months ago* (last edited 2 months ago)

My last post didn't substantially address smaller ISPs, and from your description, it does sound like your ISP might be a smaller operator. Essentially, on the backend, a smaller ISP won't have the customer base to balance their traffic in both directions. But they still need to provision for peak traffic demand, and as you observed, that can mean leaving capacity on the table, err, fibre. This is correct from a technical perspective.

But now we touch on the business side of things again. The hypothetical small ISP -- which I'll call the Retail ISP, since they are the face that works with end-user residential customers -- will usually contract with one or more regional ISPs in the area for IP transit. That is, upstream connectivity to the broader Internet.

It would indeed be wasteful and expensive to obtain an upstream connection that guarantees 40 Gbps symmetric at all times. So they don't. Instead, the Retail ISP would pursue a burstable billing contract, where they commit to specific, continual, averaged traffic rates in each direction, but have some flexibility to use more or less than that committed value.
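Burstable transit is commonly billed at the 95th percentile, which is one concrete form such a contract can take (not necessarily the exact arrangement described above); a sketch with made-up sample data:

```python
# Sketch of 95th-percentile burstable billing, a common industry mechanism.
# Traffic is sampled at fixed intervals (typically every 5 minutes); the top
# 5% of samples are discarded, and the customer is billed on the highest
# remaining sample. Short bursts above the commitment are thus "free".
def billable_rate_gbps(samples):
    """samples: per-interval traffic rates in Gbps, e.g. one per 5 minutes."""
    ordered = sorted(samples)
    cutoff = int(len(ordered) * 0.95)      # index just past the 95th percentile
    return ordered[cutoff - 1]

# A made-up month: mostly ~18 Gbps, with a handful of 45 Gbps bursts.
month = [18.0] * 95 + [45.0] * 5
print(billable_rate_gbps(month))  # 18.0 -- the bursts fall in the discarded 5%
```

This is why brief exceedances cost nothing while sustained ones push the billable figure up, matching the "flexibility around a commitment" idea.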

So even if the Retail ISP is guaranteeing each end-user at least 40 Gbps download, the Retail ISP must write up a deal with the Upstream ISP based on averages. And with, say, 1000 customers, the law of averages will hold true. So let's say the average rates are actually 20 Gbps down/1 Gbps up.

To be statistically rigorous, though, I should mention that traffic estimation is a science, with applicability to everything from data-network and road-traffic planning to queuing for the bar at a music venue to managing electric-grid stability. Looking at historical data to determine a weighted average is fairly straightforward, but compensating for variables so that it becomes future-predictive is the stuff of statisticians with post-nominal degrees.

What I can say, though, from what I remember in calculus at uni, is that if each end-user's traffic rate is independent of the other end-users' (a proposition that is usually true, though not necessarily at all times of day), then the Central Limit Theorem states that the aggregate traffic across end-users will approximate a normal distribution (aka Gaussian, or bell curve), getting closer as the number of users grows. This was a staggering result when I first learned it, because it really doesn't matter what each user is doing; it all becomes a bell curve in the end.

The Retail ISP's contract with the Upstream ISP probably has two parts: a circuit, and transit. The circuit is the physical line; for the given traffic, a 50 Gbps fibre connection might be provisioned to leave plenty of room for burstable bandwidth. If the Retail ISP is somewhat remote, a microwave RF link could instead be set up, or leased from a third party. But we'll stick with fibre, as that's going to be symmetrical.

As a brief aside, even though a 40 Gbps circuit would also be sufficient, sometimes the Upstream ISP's nearby equipment doesn't support certain speeds. If the circuit is Ethernet-based, then a 40 Gbps QSFP+ circuit is internally four 10 Gbps links bundled together on the same fibre line. But supposing the Upstream ISP normally sells 200 Gbps circuits, then 50 Gbps to the Retail ISP makes more sense, as a 200 Gbps QSFP56 circuit is internally made from four 50 Gbps lanes, which oftentimes can be broken out. The Upstream and Retail ISPs need to agree on the technical specs for the circuit, but it certainly must provide overhead beyond the averages agreed upon.

And those averages are captured in the transit contract, where brief exceedances/underages are not penalized but prolonged conditions would be subject to fees or even result in new contract negotiations. The "waste" of circuit capacity (especially upload) is something both the Retail ISP (who saves money, since guaranteed 50 Gbps would cost much more) and the Upstream ISP willingly accept.

Why? Because the Upstream ISP is also trying to balance the traffic to their upstream, to avoid fees for imbalance. So even though the Retail ISP can't guarantee symmetric traffic to the Upstream ISP, what the Retail ISP can offer is predictability.

If the Upstream ISP can group the Retail ISP's traffic with a nearby data center, then that could roughly balance out, and allow them to pursue better terms with the subsequent higher tier of upstream provider.

Now we can finally circle back to why the Retail ISP would decline to offer end-users faster upload speeds. Simply put, the Retail ISP may be aware that even if they offered higher upload, most residential customers wouldn't really take advantage of it, even as a free upgrade. This is the reality of residential Internet traffic. Indeed, the few ISPs in the USA offering residential 10 Gbps connections must be thoroughly aware that even the most dedicated of, err, Linux ISO aficionados cannot saturate that connection for more than a few hours per month.

But if most won't take advantage of it, then that shouldn't impact the Retail ISP's burstable contract with the Upstream ISP, so it's a free choice, right? Well, yes, but it's not the only consideration. The thing about offering more upload is that while most customers won't use it, a small handful will. And maybe those customers are the type to complain loudly if the faster upload isn't honored, which could hurt the Retail ISP's reputation. So rather than take that gamble by guaranteeing faster upload for residential connections, they'd prefer to just make it "best effort", whatever that means.

EDIT: The description above sounds a bit defeatist for people who just want faster upload, since it seems that ISPs just want to do the bare minimum and not cater to users who are self-hosting, whom ISPs believe to be a minority. So I wanted to briefly -- and I'm aware that I'm long winded -- describe what it would take to change that assumption.

Essentially, existing "average joe" users would have to start uploading a lot more than they are now. With so-called cloud services, it might seem that upload should go up, if everyone's photos are stored on remote servers. But cloud services also power major sites like Netflix, which are larger download sources. So net-net, I would guess that the residential customer's download-to-upload ratio is growing wider, and isn't shrinking.

It would take a monumental change in networking, computing, or consumer demand to reverse this tide. Example: a world where data sovereignty -- bona fide ownership of your own data -- is so paramount that everyone and their mother has a social-media server at home that mutually relays and amplifies viral content. That is to say, self-hosting and upload amplification.

[–] litchralee@sh.itjust.works 7 points 2 months ago* (last edited 2 months ago) (3 children)

Historically, last-mile technologies like dial-up, DSL, satellite, and DOCSIS/cable had limitations on their uplink power. That is, the amount of power they could use to transmit upstream through the medium.

Dial-up and DSL had to comply with rules on telephone equipment, which I believe limited end-user equipment to less power than what the phone company can put onto the wires, premised on the phone company being better positioned to identify and manage interference between different phone lines. Generally, using reduced power reduces signal-to-noise ratio, which means less theoretical and practical bandwidth available for the upstream direction.

Cable has a similar restriction, because cable plants could not permit end-user "back feeding" of the cable system. To make cable modems work, some amount of power must be allowed to travel upstream, but too much would potentially cause interference to other customers. Hence, regulatory restrictions on upstream power. This also matched actual customer usage patterns at the time.

Satellite is more straightforward: satellite dishes on earth are kinda tiny compared to the bus-sized satellite's antennae. So sending RF up to space is just harder than receiving it.

Fibre, on the other hand, has a huge amount of bandwidth, to the point that when new PON standards are written, they don't even bother reusing the old standard's allocated wavelengths, but define new ones. That way, both old and new services can operate on the same fibre during the switchover period. So fibre allocates symmetrical bandwidth by default, although some PON systems are still closer to cable's asymmetry.

But there's also the backend side of things: if a major ISP only served residential customers, who predominantly have asymmetric traffic patterns, then they will likely have to pay money to peer with other ISPs, because of the disparity. Major ISPs solve this by offering services to data centers, which generally are asymmetric but tilted towards upload. By balancing residential with server customers, the ISP can obtain cheaper or even free peering with other ISPs, because symmetrical traffic would benefit both and improve the network.

[–] litchralee@sh.itjust.works 1 points 3 months ago* (last edited 3 months ago)

Looking at the diagram, I don't see any issue with the network topology. And the power arrangement also shouldn't be a problem, unless you require the camera/DVR setup to persist during a power cut.

In that scenario, you would have to provide UPS power to all of: the PoE switch, the L3 switch, and the NVR. But if you don't have such a requirement, then I don't see a problem here.

Also, I hope you're doing well now.

[–] litchralee@sh.itjust.works 1 points 3 months ago (2 children)

Would you be able to draw an ASCII-art picture of what you're asking? I'm having a hard time understanding whether the L3 switch is in the same networking closet alongside the PoE switch, and if the NVR is also right next to the L3 switch. Also, is the NVR powered with PoE, or does it need its own AC power?

[–] litchralee@sh.itjust.works 3 points 4 months ago* (last edited 4 months ago)

Regarding tcpdump, you can use it directly from the command line to print out packets, or you can have it record traffic to a file. That file can later be opened in Wireshark for analysis, with filters, color, ~~blackjack and hookers~~, and protocol decoding.

Things like TCP retries or MTU issues/fragmentation will be very apparent when presented in Wireshark.
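The typical invocations look something like this (eth0 is an assumption; check your actual interface name first):

```shell
# Record traffic for port 443 to a file, then open capture.pcap in Wireshark
# on any machine for filtered, color-coded analysis:
sudo tcpdump -i eth0 -w capture.pcap 'port 443'

# Or print packets live on the command line, with -nn suppressing DNS and
# port-name resolution so you see raw addresses:
sudo tcpdump -i eth0 -nn 'tcp port 443'
```

The filter expression at the end is optional; omit it to capture everything on the interface.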

[–] litchralee@sh.itjust.works 27 points 4 months ago (1 children)

The original reporting by 404media is excellent in that it covers the background context, links to the actual PDF of the lawsuit, and reaches out to an outside expert to verify information presented in the lawsuit and learned from their research. It's a worthwhile read, although it's behind a paywall; archive.ph may be effective though.

For folks that just want to see the lawsuit and its probably-dodgy claims, the most recent First Amended Complaint is available through RECAP here, along with most of the other legal documents in the case. As for how RECAP can store copies of these documents, see this FAQ and consider donating to their cause.

Basically, AXS complains about nine things, generally around: copyright infringement, DMCA violations (ie hacking/reverse engineering), trademark counterfeiting and infringement, various unfair competition statutes, civil conspiracy, and breach of contract (re: terms of service).

I find the civil conspiracy claim to be a bit weird, since it would require proof that the various other ticket websites actually made contact with each other and agreed to do the other eight things that AXS is complaining about. Why would those other websites -- who are mutual competitors -- do that? Of course, this is just the complaint, so it's whatever AXS wants to claim under "information and belief", aka it's what they think happened, not necessarily with proof yet.

[–] litchralee@sh.itjust.works 4 points 4 months ago* (last edited 4 months ago)

Agreed. When I was fresh out of university, my first job had me debugging embedded firmware for a device that had both a PowerPC processor and an ARM coprocessor. I remember many evenings staring at disassembled instructions in objdump, as well as getting good at endian conversions. The PPC processor was big-endian and the ARM was little-endian, which is typical for those processor families. We did briefly consider synthesizing one of them to match the other's endianness, but that was deemed even more confusing haha
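For anyone who hasn't had the pleasure, the mismatch looks like this (a Python sketch for illustration; the real debugging was in C and objdump, of course):

```python
# The same 32-bit value as laid out in memory on each processor family.
import struct

value = 0x12345678
big    = struct.pack(">I", value)  # big-endian (PowerPC): most significant byte first
little = struct.pack("<I", value)  # little-endian (ARM here): least significant first

print(big.hex())     # 12345678
print(little.hex())  # 78563412

# Converting between the two is just a byte reversal:
assert little[::-1] == big
assert int.from_bytes(big, "big") == int.from_bytes(little, "little") == value
```

Once you've stared at enough hex dumps, reading "78563412" as 0x12345678 becomes disturbingly automatic.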
