saint

joined 3 years ago

Alice Evans is diving into a new Econ paper.

Ingrid Haegele finds that junior men are more likely to apply for promotions, primarily due to a greater desire for team leadership.

 

OG

 

Got some time to read the article: I am sure India is not an exception in leaking sensitive data and being in deep shit when it comes to storing it.

It seems we should assume that we cannot prevent data leaks. So the question is: how do we deal with the aftermath?

A Leak of Biometric Police Data Is a Sign of Things to Come

Highlights

Thousands of law enforcement officials and people applying to be police officers in India have had their personal information leaked online—including fingerprints, facial scan images, signatures, and details of tattoos and scars on their bodies.

While the misconfigured server has now been closed off, the incident highlights the risks of companies collecting and storing biometric data, such as fingerprints and facial images, and how they could be misused if the data is accidentally leaked.

“A lot of data is collected in India, but nobody's really bothered about how to store it properly,” Narayan says. Data breaches are happening so regularly that people have “lost that surprise shock factor.”

“So many other countries are looking at biometric verification for identities, and all of that information has to be stored somewhere,” Fowler says. “If you farm it out to a third-party company, or a private company, you lose control of that data. When a data breach happens, you’re in deep shit, for lack of a better term.”

 

Get to know some books by Vernor Vinge

 

When Regulation Encourages ISPs to Hack Their Customers

Highlights

KT, formerly Korea Telecom, has been accused of deliberately infecting 600,000 of its own customers with malware to reduce peer-to-peer file sharing traffic. This is a bizarre hack and a great case study of how government regulation has distorted the South Korean internet.

South Korean media outlet JTBC reported last month that KT had infected customers who were using Korean cloud data storage services known as 'webhards' (web hard drives). The malware disabled the webhard software, resulted in files disappearing and sometimes caused computers to crash.

JTBC news says the team involved "consisted of a 'malware development' section, a 'distribution and operation' section, and a 'wiretapping' section that looked at data sent and received by KT users in real time".

The company claims that the people involved in the webhard hack were a small group operating independently. It's just an amazing coincidence that they happened to invest so much time and effort into a caper that aligned so well with KT's financial interests!

South Korea has a 'sender pays' model in which ISPs must pay for traffic they send to other ISPs, breaking the worldwide norm of 'settlement-free peering', voluntary arrangements whereby ISPs exchange traffic without cost.

Once the sender pays rules were enforced, however, KT was left with large bills from its peer ISPs for the Facebook traffic sent from the cache in its network. KT tried to recoup costs from Facebook, but negotiations broke down and Facebook disabled the cache. South Korean users were instead routed over relatively expensive links to overseas caches with increased latency.

These sender pays rules may also encourage peer-to-peer file sharing relative to more centralised pirate content operations.

An unnamed sales manager from a webhard company told TorrentFreak that torrent transfers saved them significant bandwidth costs, but as long as traffic flows between ISPs, someone will pay. KT is South Korea's largest broadband provider; because it has the most customers, peer-to-peer file sharing means the company ends up paying fees to its competitor ISPs.
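To make the incentive concrete, here is a toy sketch of the settlement arithmetic. The per-GB rate, traffic volume, and KT's subscriber share below are made-up numbers for illustration, not figures from the article:

```python
# Toy model of South Korea's 'sender pays' interconnection rule.
# All numbers are hypothetical; real settlement rates are not public.

RATE_PER_GB = 0.02  # assumed inter-ISP settlement fee, $ per GB sent

def settlement_fee(gb_sent_to_other_isps: float) -> float:
    """Fee an ISP owes its peers for traffic it sends them."""
    return gb_sent_to_other_isps * RATE_PER_GB

# Centralised webhard: the webhard's hosting ISP originates the file
# transfers, so that one ISP carries the whole settlement cost.
print(settlement_fee(1_000_000))  # hosting ISP pays $20,000

# Peer-to-peer: seeders are spread across ISPs roughly in proportion
# to subscriber share. KT has the most subscribers, so the largest
# slice of upload traffic originates inside KT's network.
kt_share = 0.4  # hypothetical subscriber share
print(settlement_fee(kt_share * 1_000_000))  # KT pays $8,000 of the total
```

Under this model, centralised piracy concentrates the bill on someone else's ISP, while P2P shifts it onto whichever ISP has the most customers, which is KT.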

Either way, this is just a great example of where unusual regulation can produce unusual results.

fun

 

remote and interesting write-up

 

Pluralistic: The reason you can't buy a car is the same reason that your health insurer let hackers dox you (28 Jun 2024)

Highlights

Equifax knew the breach was coming. It wasn't just that their top execs liquidated their stock in Equifax before the announcement of the breach – it was also that they ignored years of increasingly urgent warnings from IT staff about the problems with their server security.

Just like with Equifax, the 737 Max disasters tipped Boeing into a string of increasingly grim catastrophes.

Equifax isn't just a company: it's infrastructure.

This witch-hunts-as-a-service morphed into an official part of the economy, the backbone of the credit industry, with a license to secretly destroy your life with haphazardly assembled "facts" about your life that you had the most minimal, grudging right to appeal (or even see).

There's a direct line from that acquisition spree to the Equifax breach(es). First of all, companies like Equifax were early adopters of technology. They're a database company, so they were the crash-test dummies for every generation of database.

There's a reason libraries, cities, insurance companies, and other giant institutions keep getting breached: they started accumulating tech debt before anyone else, so they've got more asbestos in the walls, more sagging joists, more foundation cracks and more termites.

The reason to merge with your competitors is to create a monopoly position, and the value of a monopoly position is that it makes a company too big to fail, which makes it too big to jail, which makes it too big to care.

The biggest difference was that Boeing once had a useful, high-quality product, whereas Equifax started off as an irredeemably terrible, if efficient, discrimination machine, and grew to become an equally terrible, but also ferociously incompetent, enterprise.

Every corporate behemoth is locked in a race between the eventual discovery of its irreparable structural defects and its ability to become so enmeshed in our lives that we have to assume the costs of fixing those defects. It's a contest between "too rotten to stand" and "too big to care."

Remember how we discovered this? Change was hacked, went down, ransomed, and no one could fill a scrip in America for more than a week, until they paid the hackers $22m in Bitcoin?

Well, first Unitedhealthcare became the largest health insurer in America by buying all its competitors in a series of mergers that comatose antitrust regulators failed to block. Then it combined all those other companies' IT systems into a cosmic-scale dog's breakfast that barely ran. Then it bought Change and used its monopoly power to ensure that every Rx ran through Change's servers, which were part of that asbestos-filled, termite-infested, crack-foundationed, sag-joisted teardown. Then, it got hacked.

Good luck with that. There's a company you've never heard of. It's called CDK Global. They provide "dealer management software." They are a monopolist. They got that way after being bought by a private equity fund called Brookfield. You can't complete a car purchase without their systems, and their systems have been hacked.

What happens next is a near-certainty: CDK will pay a multimillion dollar ransom, and the hackers will reward them by breaching the personal details of everyone who's ever bought a car, and the slaves in Cambodian pig-butchering compounds will get a fresh supply of kompromat.

But on the plus side, the need to pay these huge ransoms is key to ensuring liquidity in the cryptocurrency markets, because ransoms are now the only nondiscretionary liability that can only be settled in crypto.

;)

 

How We Built the Internet

Highlights

The internet is a universe of its own.

The infrastructure that makes this scale possible is similarly astounding: a massive, global web of physical hardware, consisting of more than 5 billion kilometers of fiber-optic cable, more than 574 active and planned submarine cables that span over 1 million kilometers in length, and a constellation of more than 5,400 satellites offering connectivity from low earth orbit (LEO).

“The Internet is no longer tracking the population of humans and the level of human use. The growth of the Internet is no longer bounded by human population growth, nor the number of hours in the day when humans are awake,” writes Geoff Huston, chief scientist at the nonprofit Asia Pacific Network Information Center.

As Shannon studied the structures of messages and language systems, he realized that there was a mathematical structure that underlay information. This meant that information could, in fact, be quantified.

Shannon noted that all information traveling from a sender to a recipient must pass through a channel, whether that channel be a wire or the atmosphere.

Shannon’s transformative insight was that every channel has a threshold: a maximum amount of information that can be delivered reliably to a recipient.
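The excerpt stops short of the formula, but this threshold is what the Shannon-Hartley theorem quantifies: C = B·log2(1 + S/N). A minimal sketch, using a plain telephone line as the example channel:

```python
import math

def channel_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley theorem: the maximum bit rate (bits/s) that can
    be delivered reliably over a channel with the given bandwidth and
    signal-to-noise ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A plain telephone line: ~3 kHz of bandwidth, ~30 dB SNR (a factor of 1000).
print(channel_capacity(3_000, 1_000))
# ≈ 29,900 bits/s, close to what late dial-up modems actually achieved
```

No amount of engineering cleverness gets you past this ceiling; you can only widen the channel or reduce the noise.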

Kleinrock approached AT&T and asked if the company would be interested in implementing such a system. AT&T rejected his proposal—most demand was still in analog communications. Instead, they told him to use the regular phone lines to send his digital communications—but that made no economic sense.

What was exceedingly clever about this suite of protocols was its generality. TCP and IP did not care which carrier technology transmitted their packets, whether it be copper wire, fiber-optic cable, or radio. And they imposed no constraints on what the bits could be formatted into: video, text, simple messages, or even web pages formatted in a browser.

David Clark, one of the architects of the original internet, wrote in 1978 that “we should … prepare for the day when there are more than 256 networks in the Internet.”

Fiber was initially laid down by telecom companies offering high-quality cable television service to homes. The same lines would be used to provide internet access to these households. However, these service speeds were so fast that a whole new category of behavior became possible online. Information moved fast enough to make applications like video calling or video streaming a reality.

And while it may have been the government and small research groups that kickstarted the birth of the internet, its evolution henceforth was dictated by market forces, including service providers that offered cheaper-than-ever communication channels and users that primarily wanted to use those channels for entertainment.

In 2022, video streaming comprised nearly 58 percent of all Internet traffic. Netflix and YouTube alone accounted for 15 and 11 percent, respectively.

At the time, Facebook users in Asia or Africa had a completely different experience to their counterparts in the U.S. Their connection to a Facebook server had to travel halfway around the world, while users in the U.S. or Canada could enjoy nearly instantaneous service. To combat this, larger companies like Google, Facebook, Netflix, and others began storing their content physically closer to users through CDNs, or “content delivery networks.”

Instead of simply owning the CDNs that host your data, why not own the literal fiber cable that connects servers from the United States to the rest of the world?

Most of the world’s submarine cable capacity is now either partially or entirely owned by a FAANG company—meaning Facebook (Meta), Amazon, Apple, Netflix, or Google (Alphabet).

Google, which owns a number of sub-sea cables across the Atlantic and Pacific, can deliver hundreds of terabits per second through its infrastructure.

In other words, these applications have become so popular that they have had to leave traditional internet infrastructure and operate their services within their own private networks. These networks not only handle the physical layer, but also create new transfer protocols, totally disconnected from IP or TCP. Data is transferred on their own private protocols, essentially creating digital fiefdoms.

SpaceX’s Starlink is already unlocking a completely new way of providing service to millions. Its data packets, which travel to users via radio waves from low earth orbit, may soon be one of the fastest and most economical ways of delivering internet access to a majority of users on Earth. After all, the distance from LEO to the surface of the Earth is just a fraction of the length of subsea cables across the Atlantic and Pacific oceans.

What is next?


Incantations (josvisser.substack.com)

Highlights

The problem with incantations is that you don’t understand in what exact circumstances they work. Change the circumstances, and your incantations might still work, might not work anymore, might do something else, or, maybe worse, might do lots of damage. It is not safe to rely on incantations; you need to move to understanding.

 

We can best view the method of science as the use of our sophisticated methodological toolbox

Highlights

Scientific, medical, and technological knowledge has transformed our world, but we still poorly understand the nature of scientific methodology.

Scientific methodology has not been systematically analyzed using large-scale data and scientific methods themselves, as it is viewed as not easily amenable to scientific study.

This study reveals that 25% of all discoveries since 1900 did not apply the common scientific method (all three features)—with 6% of discoveries using no observation, 23% using no experimentation, and 17% not testing a hypothesis.

Empirical evidence thus challenges the common view of the scientific method.

This provides a new perspective to the scientific method—embedded in our sophisticated methods and instruments—and suggests that we need to reform and extend the way we view the scientific method and discovery process.

In fact, hundreds of major scientific discoveries did not use “the scientific method”, as defined in science dictionaries as the combined process of “the collection of data through observation and experiment, and the formulation and testing of hypotheses” (1). In other words, it is “The process of observing, asking questions, and seeking answers through tests and experiments” (2, cf. 3).

In general, this universal method is commonly viewed as a unifying method of science and can be traced back at least to Francis Bacon's theory of scientific methodology in 1620, which popularized the concept.

Science thus does not always fit the textbook definition.

Comparison across fields provides evidence that the common scientific method was not applied in making about half of all Nobel Prize discoveries in astronomy, economics and social sciences, and a quarter of such discoveries in physics, as highlighted in Fig. 2b. Some discoveries are thus non-experimental and more theoretical in nature, while others are made in an exploratory way, without explicitly formulating and testing a preestablished hypothesis.

We find that one general feature of scientific methodology is applied in making science's major discoveries: the use of sophisticated methods or instruments. These are defined here as scientific methods and instruments that extend our cognitive and sensory abilities—such as statistical methods, lasers, and chromatography methods. They are external resources (material artifacts) that can be shared and used by others—whereas observing, hypothesizing, and experimenting are, in contrast, largely internal (cognitive) abilities that are not material (Fig. 2).

Just as science has evolved, so should the classic scientific method—which is construed in such general terms that it would be better described as a basic method of reasoning used for human activities (non-scientific and scientific).

An experimental research design was not carried out when Einstein developed the law of the photoelectric effect in 1905 or when Franklin, Crick, and Watson discovered the double helix structure of DNA in 1953 using observational images developed by Franklin.

Direct observation was not made when for example Penrose developed the mathematical proof for black holes in 1965 or when Prigogine developed the theory of dissipative structures in thermodynamics in 1969. A hypothesis was not directly tested when Jerne developed the natural-selection theory of antibody formation in 1955 or when Peebles developed the theoretical framework of physical cosmology in 1965.

Sophisticated methods make research more accurate and reliable and enable us to evaluate the quality of research.

Applying observation and a complex method or instrument together is decisive, producing nearly all major discoveries (94%) and illustrating the central importance of the empirical sciences in driving discovery and science.

 

How much are your 9's worth?

Highlights

All nines are not created equal. Most of the time I hear an extraordinarily high availability claim (anything above 99.9%) I immediately start thinking about how that number is calculated and wondering how realistic it is.

Human beings are funny, though. It turns out we respond pretty well to simplicity and order.

Having a single number to measure service health is a great way for humans to look at a table of historical availability and understand if service availability is getting better or worse. It’s also the best way to create accountability and measure behavior over time…

… as long as your measurement is reasonably accurate and not a vanity metric.
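For a sense of scale, each extra nine cuts the allowed downtime tenfold. A quick sketch of the yearly downtime budget implied by each availability target:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_budget_minutes(availability: float) -> float:
    """Minutes per year a service may be down and still meet its target."""
    return (1 - availability) * MINUTES_PER_YEAR

for target in (0.99, 0.999, 0.9999, 0.99999):
    print(f"{target:.5f}: {downtime_budget_minutes(target):8.1f} min/year")
# 0.99000:  5256.0 min/year (~3.7 days)
# 0.99900:   525.6 min/year (~8.8 hours)
# 0.99990:    52.6 min/year
# 0.99999:     5.3 min/year
```

Five nines leaves barely five minutes a year, which is exactly why claims above 99.9% deserve scrutiny of how the number was computed.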

Cheat #1 - Measure the narrowest path possible.

This is the easiest way to cheat a 9’s metric. Many nines numbers I have seen are various versions of this cheat code. How can we create a narrow measurement path?

Cheat #2 - Lump everything into a single bucket.

Not all requests are created equal.

Cheat #3 - Don’t measure latency.

This is an availability metric we’re talking about here, why would we care about how long things take, as long as they are successful?!
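A small sketch of how much this cheat flatters the number: the same synthetic traffic scored with and without a latency threshold (the request mix and the 500 ms threshold below are made up):

```python
# Each request: (http_success, latency_ms). Sample data is synthetic:
# 950 fast successes, 40 painfully slow successes, 10 hard failures.
requests = [(True, 80)] * 950 + [(True, 4_000)] * 40 + [(False, 100)] * 10

# Cheat #3: count every HTTP success, no matter how slow.
naive = sum(ok for ok, _ in requests) / len(requests)

# Honest version: a success slower than the SLO threshold counts as a failure.
LATENCY_SLO_MS = 500  # hypothetical threshold
strict = sum(ok and ms <= LATENCY_SLO_MS for ok, ms in requests) / len(requests)

print(f"ignoring latency: {naive:.1%}")   # 99.0%
print(f"with latency SLO: {strict:.1%}")  # 95.0%
```

To a customer staring at a spinner for four seconds, those 40 "successes" were outages; only the strict number reflects that.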

Cheat #4 - Measure total volume, not minutes.

Let’s get a little controversial.

In order to cheat the metric we want to choose the calculation that looks the best, since even though we might have been having a bad time for 3 hours (1 out of every 10 requests was failing), not every customer was impacted so it wouldn’t be “fair” to count that time against us.
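A sketch of how the two calculations diverge for exactly the 3-hour incident described above (traffic volumes are synthetic):

```python
# Synthetic day of traffic: 24 hours at 1,000 requests/minute.
# For 3 hours (180 minutes), 1 out of every 10 requests fails.
total_minutes = 24 * 60
good_minutes = total_minutes - 180
failed = 180 * 1_000 // 10        # 18,000 failed requests
total = total_minutes * 1_000     # 1,440,000 requests

by_volume = (total - failed) / total    # count requests
by_minutes = good_minutes / total_minutes  # count bad minutes

print(f"by request volume: {by_volume:.2%}")   # 98.75%
print(f"by bad minutes:    {by_minutes:.2%}")  # 87.50%
```

Same incident, and the volume-based number looks more than ten points better, which is why it is the calculation that gets picked.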

Building more specific models of customer paths takes work. It requires manual effort and customization to model customer behavior (read: engineering time). Sometimes we just don’t have people with the time or specialization to do this, or it will cost too much to maintain in the future.

We don’t have data on all of the customer scenarios. In this case we just can’t measure enough to be sure what our availability is.

Sometimes we really don’t care (and neither do our customers). Some of the pages we build for our websites are… not very useful. Sometimes spending the time to measure (or fix) these scenarios just isn’t worth the effort. It’s important to focus on important scenarios for your customers and not waste engineering effort on things that aren’t very important (this is a very good way to create an ineffective availability effort at a company).

Mental shortcuts matter. No matter how much education we try, it’s hard to change perceptions of executives, engineers, etc. Sometimes it is better to pick the abstraction that helps people understand than pick the most accurate one.

Data volume and data quality are important to measurement. If we don’t have a good idea of which errors are “okay” and which are not, or we just don’t have that much traffic, some of these measurements become almost useless (what is the SLO of a website with 3 requests? does it matter?).

What is your way of cheating nines? ;)

[–] [email protected] 2 points 2 years ago

nice thinking, TRIZ-like.

[–] [email protected] 1 points 2 years ago

Genius is in simplicity

[–] [email protected] 7 points 2 years ago

interesting question, somehow i think that drones were launched from somewhere in Moscow

[–] [email protected] 4 points 2 years ago

midnight commander, especially if i need to delete files/dirs with '-' and non-ascii characters. i do it without thinking.

[–] [email protected] 9 points 2 years ago

read books, play games, watch tv, walk the dog, love my wife, sleep

[–] [email protected] 0 points 2 years ago

a bro and a sis, all of us living in different countries. we crossed water and fire, have internal conflicts from time to time, but if somebody dares to touch us from the "outside" - we become one buddha palm ;)

[–] [email protected] 3 points 2 years ago (1 children)

death stranding

[–] [email protected] 1 points 2 years ago

Reading: Everything is Under Control by Robert Anton Wilson. Listening: Galaxy Outlaws: The Complete Black Ocean Mobius Missions by J.S. Morin, narrated by Mikael Naramore.

[–] [email protected] 1 points 2 years ago

trying to pickle cabbage today, hope i will not die of poisoning ;)
