
The 8 Fallacies of Distributed Computing for PHP Developers

Lisa Kudrow
Release: 2025-02-27 08:27:13


Eight fallacies PHP developers must watch for when building distributed applications

Peter Deutsch proposed seven fallacies of distributed computing in 1997, and James Gosling (the father of Java) later added an eighth. These fallacies matter to PHP developers because we build distributed applications every day: mashups, applications that consume SOAP and REST services, user authentication through the Facebook, Google, or Twitter APIs, retrieval of information from remote databases and cache services, and more. What we build are distributed computing applications. Understanding these eight fallacies and their implications is therefore essential.

Key points:

  • The eight fallacies of distributed computing identified by Peter Deutsch and James Gosling are crucial to PHP developers: the network is reliable; latency is zero; bandwidth is infinite; the network is secure; topology doesn't change; there is only one administrator; transport cost is zero; the network is homogeneous.
  • Despite significant improvements in network reliability and bandwidth, neither is perfect. Developers must anticipate failures and build handling strategies into application design and deployment.
  • Network security is, and will remain, a serious concern; developers must prioritize good security practices and evaluate the security measures of their hosting providers and partners.
  • Developers should be prepared for topology changes, the possibility of switching providers, and the costs associated with data transfer. They should also take a flexible approach that can handle multiple databases and data sources, without assuming the network is homogeneous.
  1. The network is reliable

This is plainly false. Network latency has fallen and bandwidth has grown substantially since 1995, but it is still wrong to call the network reliable. Suppose we build a simple application that doesn't use many services: a PHP application backed by MySQL. No problem, it seems. But suppose we later move to a hosted MySQL provider such as Xeround to meet our database needs. Even with good scalability and high availability, what happens when their system has a problem? What if their infrastructure is hit by a DDoS attack, or goes down because of an internal issue? We often hear of 99.999% uptime, but that's still not 100%. With today's abundance of services and generally highly available bandwidth, it's easy to forget that nothing is perfect. How does your application account for service failures?
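Designing for failure can start with refusing to assume the first attempt succeeds. Below is a minimal retry-with-backoff sketch; the helper name `retryWithBackoff` and the simulated flaky service are illustrative, not from any particular library.

```php
<?php
// Hypothetical retry helper: attempts a callable up to $maxAttempts times,
// sleeping a little longer between each try.
function retryWithBackoff(callable $operation, int $maxAttempts = 3, int $baseDelayMs = 100)
{
    $attempt = 0;
    while (true) {
        try {
            return $operation();
        } catch (RuntimeException $e) {
            $attempt++;
            if ($attempt >= $maxAttempts) {
                throw $e; // give up after the final attempt
            }
            usleep($baseDelayMs * 1000 * $attempt); // linear backoff
        }
    }
}

// Simulate a flaky remote service: fails twice, then succeeds.
$calls = 0;
$result = retryWithBackoff(function () use (&$calls) {
    $calls++;
    if ($calls < 3) {
        throw new RuntimeException('connection refused');
    }
    return 'rows';
});
// $result is 'rows' after three attempts.
```

In a real application the closure would wrap the PDO connection or HTTP request, and after the final failure you would degrade gracefully (serve cached data, show a friendly error) rather than rethrow to the user.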

  2. Latency is zero

Latency may be very low today, and it is certainly much lower than it was a few years ago, but it will never be zero. To paraphrase Arnon Rotem-Gal-Oz in "Fallacies of Distributed Computing Explained": with signals travelling at roughly 300,000 km/s, even if processing were instantaneous, a ping from Europe to the United States and back would still take at least 30 milliseconds.

Is this a bad thing? It depends. Depending on our application's structure and the resources available to us, we can mitigate latency considerably. We can host our applications on a service such as Amazon Web Services and take advantage of S3 so that our data lives in multiple regions around the world, bringing it closer to our end users and reducing latency. But even though we can reduce latency, we can never eliminate it. We can adopt a range of techniques and architectures to lessen its impact, but no matter what we do, it will always be there. Did you consider this when designing your application?
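One way to stop paying a round trip on every request is to cache remote results locally. This is a minimal sketch: `cachedFetch` and the fake exchange-rate service are hypothetical stand-ins for a real cache layer (APCu, Memcached, Redis) and a real HTTP call.

```php
<?php
// Pay the network latency once, then answer subsequent requests locally.
$cache = [];
$remoteCalls = 0;

function cachedFetch(string $key, callable $slowFetch, array &$cache)
{
    if (!array_key_exists($key, $cache)) {
        $cache[$key] = $slowFetch(); // pays the round-trip latency once
    }
    return $cache[$key];
}

// Stand-in for a slow remote call (30 ms or more away).
$fetchExchangeRate = function () use (&$remoteCalls) {
    $remoteCalls++;
    return 1.08;
};

$a = cachedFetch('eur-usd', $fetchExchangeRate, $cache);
$b = cachedFetch('eur-usd', $fetchExchangeRate, $cache);
// Both reads return 1.08, but the remote service was contacted only once.
```

Real caches need expiry and invalidation, which this sketch omits; the point is only that latency you cannot eliminate can often be paid less frequently.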

  3. Bandwidth is infinite

Is bandwidth really unlimited? And if it is, what does "unlimited" cost? As the web moves increasingly to mobile, everything old is new again. I'm not saying we're back to dial-up speeds; newer 4G networks are much faster than the earlier 2G and 3G networks. Even so, their peak data rates are currently lower than a standard broadband connection's. Moreover, with the spread of mobile broadband, the number of potential users seeking out our services (we all want to be popular; that's at least part of Facebook's success) is growing at a remarkable rate. Consider these statistics from mobithinking:

  • There are 5.9 billion mobile subscribers.
  • 1.2 billion of them are mobile web users with 3G coverage.
  • Mobile devices account for 8.49% of global web hits.

Given this, it's fair to say that while bandwidth rates, and their penetration around the world, keep increasing, the growth in users offsets the gains. And with the enormous flexibility mobile broadband provides, bursty, on-the-move consumption of services naturally follows. Are you ready for a potentially huge load on your service? Can you handle the peaks this availability brings?
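One common way to survive such peaks is to shed excess requests with a rate limiter. The token-bucket sketch below is illustrative: the class and method names are my own, and production code would persist bucket state in shared storage (Redis, for example) rather than a single PHP process.

```php
<?php
// Minimal token bucket: a burst of up to $capacity requests is allowed,
// then requests are granted only as tokens refill over time.
class TokenBucket
{
    private $capacity;
    private $refillPerSecond;
    private $tokens;
    private $lastRefill;

    public function __construct(int $capacity, float $refillPerSecond)
    {
        $this->capacity = $capacity;
        $this->refillPerSecond = $refillPerSecond;
        $this->tokens = (float) $capacity;
        $this->lastRefill = microtime(true);
    }

    public function allow(): bool
    {
        $now = microtime(true);
        $this->tokens = min(
            (float) $this->capacity,
            $this->tokens + ($now - $this->lastRefill) * $this->refillPerSecond
        );
        $this->lastRefill = $now;
        if ($this->tokens >= 1.0) {
            $this->tokens -= 1.0;
            return true;
        }
        return false; // caller should respond with HTTP 429
    }
}

// Allow a burst of 5, refilling one token per second.
$bucket = new TokenBucket(5, 1.0);
$granted = 0;
for ($i = 0; $i < 10; $i++) {
    if ($bucket->allow()) {
        $granted++;
    }
}
// Only the first 5 burst requests are granted.
```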

  4. The network is secure

Without going into much detail, I think it's fair to say this is, and always will be, false. If you have any doubts, perhaps you should talk to a LinkedIn or eHarmony member. When we design and deploy our applications, how much effort do we put into security, both at the host where the application runs (such as Rackspace, PagodaBox, or cloudControl) and within the design of the application itself? According to a Prolexic report covered by SecurityAffairs:

  • Malicious packet traffic targeting the financial services sector increased by 3,000% quarter over quarter.
  • In Q4 2011, malicious traffic aimed at financial services amounted to 19.1TB of data and 14 billion packets, figures that grew further in 2012.
  • In 2012, 65TB of data and 1.1 trillion packets were identified and mitigated, roughly 80 times the 2011 volume.

Given that the network is not secure, we need to treat good security practice as a matter of course. With the wealth of sound advice available, from Chris Shiflett's blog and Essential PHP Security to the PHP Security Consortium and beyond, there is little excuse for not knowing how, and why, to build security into the core of our applications. What are your security practices? Have you vetted the vendors you deploy to?
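Chief among those core practices is never splicing user input into SQL. The sketch below uses PDO prepared statements with an in-memory SQLite database so it is self-contained; the table and the attack string are contrived for illustration.

```php
<?php
// Parameterized queries are the baseline defense against SQL injection.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');
$db->exec("INSERT INTO users (name) VALUES ('alice'), ('bob')");

// Attacker-controlled input is bound as data, never concatenated into SQL.
$userInput = "alice' OR '1'='1";
$stmt = $db->prepare('SELECT COUNT(*) FROM users WHERE name = :name');
$stmt->execute([':name' => $userInput]);
$count = (int) $stmt->fetchColumn();
// The malicious string matches no row; naive string concatenation
// would have matched every row in the table.
```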

  5. Topology doesn't change

It doesn't? Really? Is it that it won't change, or that we simply won't know when it does? When we host an application with someone else, we generally don't know. If a provider reconfigures, upgrades, or otherwise tweaks their data center, the topology changes, for whatever reason they choose. And given the growth in smartphone use mentioned earlier, topology changes frequently; from the perspective of users and providers alike, it changes almost daily. If a change leaves an external dependency unreachable, our database, say, that is clearly a problem. If the change happens inside our provider's network and the application keeps running, it may not be. Of course, things are easy while the application is small and hosted in a simple configuration. But applications change, especially ones that grow popular. Have you considered topology changes in your design? How do you account for, and handle, such failures in your application and deployment design?
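A small defense against shifting topology is to avoid hard-coding a single endpoint. The sketch below reads an ordered host list from configuration and fails over down the list; the hostnames are placeholders, and `$isReachable` stands in for a real health check (a TCP connect with a short timeout, for instance).

```php
<?php
// Keep endpoints in configuration, not in code, and try them in order.
$config = ['db_hosts' => ['db-primary.example.com', 'db-replica.example.com']];

function firstReachable(array $hosts, callable $isReachable): ?string
{
    foreach ($hosts as $host) {
        if ($isReachable($host)) {
            return $host;
        }
    }
    return null; // every host is down: surface this to the caller
}

// Simulated health check: pretend the primary vanished after a
// data-center reconfiguration.
$isReachable = function (string $host) {
    return $host !== 'db-primary.example.com';
};

$host = firstReachable($config['db_hosts'], $isReachable);
// $host is 'db-replica.example.com'.
```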

  6. There is only one administrator

"But my application is hosted with a single provider; they supply the operating system, database, and web server support," you say. Fine, but is there really only one administrator? And if there were, would you really trust that one person with your application? I hate to think what would happen when they're sick or on holiday. In practice there will be at least a few administrators, though each one's technical skill and broader diligence will vary. Policies such as network intrusion detection and other security measures should be in place, but there is no guarantee every administrator follows them with the same thoroughness and due diligence. Given the number of hosting providers available today and how little time it takes to update DNS records, we have plenty of options and flexibility: if one provider doesn't meet our needs and expectations, we can move to another. Have you considered how this affects you? What if you can't easily change providers? What if you face heavy vendor lock-in, or the cost of moving is high? What if your application architecture isn't flexible enough? What steps can you take to mitigate these risks?
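One step toward provider portability is keeping provider-specific settings out of the code entirely. A minimal sketch, assuming environment variables named `APP_DB_*` (my own naming, not a standard):

```php
<?php
// Read provider-specific settings from the environment so that moving to
// another host means changing configuration, not code.
function envOr(string $name, string $default): string
{
    $value = getenv($name);
    return $value === false ? $default : $value;
}

$dbConfig = [
    'dsn'      => envOr('APP_DB_DSN', 'mysql:host=localhost;dbname=app'),
    'user'     => envOr('APP_DB_USER', 'app'),
    'password' => envOr('APP_DB_PASSWORD', ''),
];

// Switching providers only requires new environment values, e.g.:
putenv('APP_DB_DSN=mysql:host=db.newhost.example;dbname=app');
$dbConfig['dsn'] = envOr('APP_DB_DSN', '');
// $dbConfig['dsn'] now points at the new provider with no code change.
```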

  7. Transport cost is zero

As with every statement so far, this one is unlikely to hold. If the servers supporting our application sit in the same rack in the same data center, transfer costs can be cut dramatically, at least in time. But what about the cost in money? Yes, we can scale elastically up and down as demand requires, and we can store our application data across geographically dispersed data centers so that it sits as close to our end users as possible, but at what price? What does the architecture of your application or service look like? Is it close to zero in cost, or in time? And if you reduce one, does the other go up?

  8. The network is homogeneous

Unlike the other fallacies, I think we PHP developers understand this one almost innately. We host our applications on Windows, Linux, Solaris, BSD, and Mac OS X servers. We store data in MySQL, SQL Server, SQLite, PostgreSQL, MongoDB, Hadoop, and Oracle. We consume external services over XML or JSON, each requiring a different interface. As a multi-operating-system, multi-service community, we have arguably never expected a homogeneous network. But the questions still need asking: is your approach flexible? Can you handle multiple databases and data sources? Do you use appropriate design patterns, such as the abstract factory, so that data of various sources and types is consumed through a transparent interface? Or would something as simple as switching from XML to JSON break your code?
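That idea can be reduced to a tiny adapter layer: one interface, one implementation per wire format, so switching a provider from XML to JSON doesn't ripple through the codebase. The class names here are illustrative.

```php
<?php
// Hide the wire format behind a single interface.
interface PayloadDecoder
{
    public function decode(string $raw): array;
}

class JsonDecoder implements PayloadDecoder
{
    public function decode(string $raw): array
    {
        return json_decode($raw, true);
    }
}

class XmlDecoder implements PayloadDecoder
{
    public function decode(string $raw): array
    {
        // Crude conversion suitable for flat documents; real code
        // would handle attributes, namespaces, and errors.
        return json_decode(json_encode(simplexml_load_string($raw)), true);
    }
}

function readUserName(PayloadDecoder $decoder, string $raw): string
{
    return $decoder->decode($raw)['name'];
}

$fromJson = readUserName(new JsonDecoder(), '{"name":"alice"}');
$fromXml  = readUserName(new XmlDecoder(), '<user><name>alice</name></user>');
// Both yield 'alice'; callers never see which format the provider used.
```

Swapping a provider's format then means choosing a different decoder at one construction site, not editing every consumer.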

Conclusion

I believe the eight fallacies of distributed computing matter as much to PHP developers today as they ever have. Given the wealth of information and resources available, we are well placed to understand them and mitigate the risks they highlight. What do you think? Do you consider them when developing applications and services? How do these eight fallacies affect your applications?
