Mastering the PHP Developer Interview: 100+ Technical Questions Answered. 106-120.
Table of contents
Explore REST principles, understand Copy-on-write, and differentiate between git merge and git rebase. Navigate scenarios like correcting commits in version control and grasp database transaction mechanics.
Understand database normalization, denormalization, and diverse relationship types. Delve into referential integrity control, primary and foreign keys, as well as the distinctions between primary and unique keys.
Explore SQL JOINs, aggregate functions, and the significance of the GROUP BY operator. Finally, grasp the distinctions between SQL's WHERE and HAVING clauses.
106. What's your understanding of REST?
Formal Explanation: REST (Representational State Transfer) is an architectural style for designing networked applications, primarily focused on distributed systems like the World Wide Web. It emphasizes a set of constraints and principles that allow resources to be accessed and manipulated through standardized and well-defined interactions. These interactions are typically carried out using HTTP methods, and RESTful APIs provide a way for clients to communicate with servers over the internet.
Simplified Explanation: REST is a way to build web services and APIs that follow a set of rules. It makes it easy for different software systems to communicate with each other over the internet using a common language.
Detailed Explanation: REST is a set of guidelines that help developers create scalable and easily maintainable web services. Some key aspects of REST include:
Resources: REST treats everything as a resource. Resources can be anything, like an article, a user, an image, or any other entity that can be accessed through a URL.
Stateless: Each request from a client to the server must contain all the information needed to understand and fulfill the request. The server should not store any client-specific information between requests. This leads to better scalability and easier caching.
HTTP Methods: RESTful APIs use HTTP methods (GET, POST, PUT, DELETE) to perform operations on resources. Each method has a specific purpose: GET retrieves data, POST adds new data, PUT updates data, and DELETE removes data.
Uniform Interface: REST APIs have a consistent way of interacting with resources using well-defined URLs and HTTP methods. This uniformity simplifies the development process and improves the overall usability of the API.
Representation: Resources are represented using a standard format, often JSON or XML. This allows different clients and servers to understand the data being exchanged.
Caching: RESTful services can take advantage of HTTP caching mechanisms to improve performance by reusing previously fetched data.
HATEOAS: Hypermedia as the Engine of Application State is a concept in REST that includes links in the API responses, allowing clients to navigate the application's state and discover available actions dynamically.
Example: Consider a blogging platform that exposes a RESTful API for managing articles. Clients can use HTTP methods to interact with this API. For instance:
To retrieve a list of articles:
GET /api/articles
To create a new article:
POST /api/articles
To update an article:
PUT /api/articles/{article_id}
To delete an article:
DELETE /api/articles/{article_id}
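As a quick illustration of the client side, here is a minimal PHP sketch that consumes such an API with the cURL extension. The https://example.com host is a placeholder, and the endpoints follow the list above:
$baseUrl = 'https://example.com/api'; // placeholder host

// GET /api/articles: retrieve the list of articles
$ch = curl_init("$baseUrl/articles");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$articles = json_decode(curl_exec($ch), true);
curl_close($ch);

// POST /api/articles: create a new article from a JSON representation
$ch = curl_init("$baseUrl/articles");
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST => true,
    CURLOPT_HTTPHEADER => ['Content-Type: application/json'],
    CURLOPT_POSTFIELDS => json_encode(['title' => 'Hello REST']),
]);
$created = json_decode(curl_exec($ch), true);
curl_close($ch);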
Conclusion: REST is a set of principles that guide the design of APIs for web applications, ensuring simplicity, scalability, and interoperability. By following RESTful principles, developers can create APIs that are easy to understand, use, and maintain.
107. What is Copy-on-write?
Formal Explanation: Copy-on-write (COW) is a memory optimization technique used in computer programming and operating systems to efficiently manage resources, especially memory. It involves delaying the duplication of data until it is necessary, minimizing unnecessary copying and reducing memory consumption.
Simplified Explanation: Copy-on-write is a smart way to save memory by not making unnecessary copies of data. Instead, it shares the data until someone tries to change it.
Detailed Explanation: Copy-on-write is often used when dealing with large data structures, like arrays or strings. Here's how it works:
Initial Sharing: When a piece of data is created or assigned to a variable, instead of immediately copying the data, the system keeps track of how many references (variables) point to the same data.
Modification Request: If one of the variables tries to modify the data, the system checks how many references exist. If there is only one reference, the data can safely be changed in place, since no other variable is looking at it. If there are multiple references, the data is first copied to a new location, and the modification is applied to that new copy. This ensures that only the variable requesting the change sees the updated data.
Reduced Memory Usage: Copy-on-write allows multiple variables to share the same data until someone needs to change it. This avoids unnecessary copying and reduces memory consumption.
Copy-on-write is used in various scenarios, such as when creating child processes, handling strings, or managing large data structures in memory. It's particularly useful when dealing with data that might be duplicated multiple times but doesn't need to be modified by every reference.
Example: Suppose you have two variables, $a and $b, both pointing to the same large array. If you modify the array using $a, the system will create a new copy of the array before applying the modification. This ensures that the original array pointed to by $b remains unchanged.
$a = [1, 2, 3]; // Original array
$b = $a; // Both variables point to the same array
// Modify the array through $a
$a[0] = 10; // A new copy of the array is created before modification
// $a contains [10, 2, 3], $b still contains [1, 2, 3]
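You can observe this behavior directly with memory_get_usage(): the assignment itself costs almost nothing, and the real copy happens only at the moment of the write. A minimal sketch:
$a = range(1, 100000);      // large array
$before = memory_get_usage();

$b = $a;                    // no copy yet: both variables share the same data
$afterAssign = memory_get_usage();

$b[0] = 42;                 // the write triggers the actual copy
$afterWrite = memory_get_usage();

echo 'Assignment cost: ' . ($afterAssign - $before) . " bytes\n";  // near zero
echo 'Write cost: ' . ($afterWrite - $afterAssign) . " bytes\n";   // roughly the array size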
Conclusion: Copy-on-write is a memory optimization technique that delays the duplication of data until it is necessary, allowing multiple variables to share the same data until modification is required. This reduces unnecessary copying and improves memory efficiency.
108. What is the difference between git merge and git rebase?
Simplified Explanation: git merge combines changes from one branch into another, while git rebase moves your changes to the tip of another branch.
Detailed Explanation:
Git Merge: When you use git merge, Git creates a new commit that has two parent commits, showing the divergence and the merge point. It retains the commit history of both branches.
Git Rebase: With git rebase, your commits are temporarily removed, the branch is updated to match the target branch, and then your commits are reapplied on top. This makes it look as if your work happened on top of the target branch all along, creating a linear commit history.
The choice between merge and rebase depends on the workflow and the desired commit history. git merge is useful for preserving the history of all changes, including the branching points. git rebase is favored for creating a linear, cleaner commit history, but it rewrites commit IDs, which can be problematic if others are working on the same branch.
Example: Suppose you have a feature branch (feature) and the main branch (main). You want to integrate your changes from feature into main.
Using git merge:
git checkout main
git merge feature
This creates a new merge commit in the main branch that includes changes from feature.
Using git rebase:
git checkout feature
git rebase main
This moves your commits from feature on top of the latest commit in main.
Conclusion: git merge combines changes and creates a merge commit, maintaining the original branching points. git rebase moves your commits to the tip of another branch, resulting in a linear commit history but rewriting commit IDs. The choice between them depends on the desired commit history and collaboration considerations.
109. You're working in a local feature branch that was created from dev. You made changes, committed them, and pushed the branch to the remote. However, you realized that you included an unnecessary file in the commit, and for some reason you cannot simply delete the file in a follow-up commit. How can you correct this situation?
If you're unable to remove the extra file in a new commit, you can still fix the issue by editing the faulty commit with an interactive rebase and then updating the remote repository with the corrected history.
Step-by-step Guide:
Check Your Situation: Make sure you are in the correct local branch (feature) and that the remote branch contains your erroneous commit.
Start Interactive Rebase: Start an interactive rebase session that reaches back far enough to include the commit that needs fixing (here, the last two commits):
git rebase -i HEAD~2
Edit the Commit: In the rebase editor, change "pick" to "edit" for the commit that needs correction.
Remove the File: After the rebase pauses at the commit you want to edit, remove the unnecessary file and amend the commit:
git rm unnecessary_file.txt
git commit --amend --no-edit
Continue Rebase: Continue the rebase after making changes:
git rebase --continue
Force Push to Remote: Since you've already pushed to the remote repository, you'll need to force push the corrected commit.
git push origin feature --force
Conclusion: If you can't delete the extra file in a new commit, you can still fix the issue by using an interactive rebase to edit the faulty commit and remove the file from the commit history. Then, force push the corrected changes to the remote repository. However, be cautious with force push: it rewrites history and can affect others working on the same branch.
110. What is a database transaction?
Formal Explanation: A database transaction is a sequence of one or more operations that are treated as a single unit of work. Transactions ensure data integrity by providing a way to ensure that a series of operations either complete successfully or leave the database in a consistent state if an error occurs. Transactions follow the ACID properties: Atomicity, Consistency, Isolation, and Durability.
Simplified Explanation: A database transaction is a group of operations that are treated as a single unit. It's like a way to bundle changes together, ensuring that either all changes happen or none of them do. This helps keep data safe and consistent.
Detailed Explanation: Imagine you're transferring money from one bank account to another. This involves two steps: deducting the amount from one account and adding it to another. If something goes wrong after deducting from the first account but before adding to the second, you could lose money or leave both accounts in an inconsistent state.
A transaction ensures that these steps are treated as one unit. It's like putting the steps in a protective bubble. If anything fails during the transaction, like a server crash or power outage, the bubble pops, and everything is rolled back to the way it was before the transaction started. This guarantees data integrity.
Transactions follow ACID properties:
Atomicity: All or nothing. Either every step in the transaction succeeds, or none of them do.
Consistency: The database starts in a consistent state and ends in a consistent state, even if a transaction fails.
Isolation: Transactions are isolated from each other, preventing interference. If one transaction is working, others can't peek at its unfinished changes.
Durability: Once a transaction is committed, its changes are permanent and safe, even if the system crashes.
For example, when you book a flight ticket online, your payment, seat reservation, and confirmation email are all part of a transaction. If anything fails, you won't be charged, and your seat won't be reserved.
In PHP, transactions are usually managed through the database layer, for example with PDO's beginTransaction(), commit(), and rollBack() methods, which correspond to the SQL statements BEGIN, COMMIT, and ROLLBACK. They help maintain data integrity, especially in applications where multiple users interact with a database concurrently.
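Here is a minimal sketch of that pattern with PDO, using the money-transfer scenario from above. An in-memory SQLite database keeps the example self-contained; the accounts table and the amounts are made up:
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)');
$pdo->exec('INSERT INTO accounts VALUES (1, 500), (2, 100)');

try {
    $pdo->beginTransaction();

    // Both updates succeed together or not at all (Atomicity).
    $pdo->prepare('UPDATE accounts SET balance = balance - ? WHERE id = ?')
        ->execute([100, 1]);
    $pdo->prepare('UPDATE accounts SET balance = balance + ? WHERE id = ?')
        ->execute([100, 2]);

    $pdo->commit();   // changes become permanent (Durability)
} catch (Exception $e) {
    $pdo->rollBack(); // undo every step of the failed transfer
    throw $e;
}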
Conclusion: A database transaction is a way to group database operations together as a single unit. It ensures that either all the operations are completed successfully, or none of them are, maintaining data integrity and consistency. Transactions are crucial for reliable database operations in various applications.
111. What is normalization in the context of databases?
Formal Explanation: Normalization is a process in database design that involves organizing data in a way that reduces redundancy and improves data integrity. It aims to eliminate data anomalies, such as update anomalies, insert anomalies, and delete anomalies, by dividing a database into smaller related tables and ensuring that each table follows specific rules, called normal forms.
Simplified Explanation: Normalization is like tidying up your messy room. It's a way to organize data in a database so that you don't have the same information stored in multiple places. This makes the data easier to manage and helps prevent mistakes.
Detailed Explanation: Imagine you're designing a database to store information about students and their courses. Without normalization, you might create a single table with all the data: student names, course names, and instructors. But what if a student takes multiple courses? You'd end up repeating their name and other information for each course, leading to redundancy.
Normalization helps solve this problem. It suggests breaking down the data into smaller related tables, each with a specific purpose. In this case, you'd have separate tables for students, courses, and instructors. You'd use unique IDs to link the tables together.
Normalization follows a set of rules called normal forms. The most common ones are First Normal Form (1NF), Second Normal Form (2NF), and Third Normal Form (3NF), with each level building on the previous one. These rules guide you in deciding how to split and organize the data to eliminate redundancy and anomalies.
For example, consider a library system. Instead of storing all the book information in one giant table, you'd have separate tables for authors, books, and borrowers. This way, if an author's name changes, you only need to update it in one place.
Normalization isn't always about splitting data; it's about ensuring that each piece of data is stored in the right place without duplication. It helps maintain data accuracy and consistency and makes databases more efficient.
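To make the students-and-courses example concrete, here is a sketch of a normalized schema created through PDO with an in-memory SQLite database. All table and column names are illustrative:
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Each entity gets its own table, so every fact is stored exactly once.
$pdo->exec('CREATE TABLE instructors (id INTEGER PRIMARY KEY, name TEXT NOT NULL)');
$pdo->exec('CREATE TABLE students    (id INTEGER PRIMARY KEY, name TEXT NOT NULL)');
$pdo->exec('CREATE TABLE courses (
    id            INTEGER PRIMARY KEY,
    title         TEXT NOT NULL,
    instructor_id INTEGER REFERENCES instructors(id)
)');

// Enrollments link students to courses by ID instead of repeating their data.
$pdo->exec('CREATE TABLE enrollments (
    student_id INTEGER REFERENCES students(id),
    course_id  INTEGER REFERENCES courses(id),
    PRIMARY KEY (student_id, course_id)
)');
With this layout, if an instructor's name changes, only one row in instructors needs an update; no course or enrollment row has to be touched.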
Conclusion: Normalization is a process in database design that involves organizing data to reduce redundancy and improve data integrity. It's like tidying up data so that it's stored in the right places without duplication. Normalization follows specific rules called normal forms and helps prevent data anomalies, making databases more efficient and reliable.
112. What is denormalization? Why is it needed?
Formal Explanation: Denormalization is a database design technique that involves intentionally introducing redundancy into the database tables. It's done to improve performance and query efficiency by reducing the need for complex joins and improving data retrieval speed. Denormalization is often used in situations where read operations significantly outnumber write operations.
Simplified Explanation: Denormalization is like making a quick access copy of your notes before an exam. It's a way to add some repeated information to your database to make it faster when you need to look up data.
Detailed Explanation: Imagine you have a database with separate tables for customers and orders. Each order has a customer ID that links it to a customer in the customers table. This is a normalized structure that prevents data duplication.
But what if you're frequently running reports that need to show order details along with customer names? With a normalized database, you'd need to join the customers and orders tables every time, which can slow things down.
This is where denormalization comes in. You might decide to add the customer name directly to the orders table. Now, when you run a report, you don't need to join tables; you can fetch the required data from a single table. This speeds up the process, especially for read-heavy applications.
Denormalization is a trade-off. It improves read performance but makes write operations (like adding or updating data) more complex because you need to ensure that redundant data stays consistent. It's useful in scenarios where data doesn't change very often, but you need quick access to it.
For instance, think about an e-commerce website. The product information doesn't change frequently, but customers are browsing and buying all the time. By denormalizing and storing product details along with order information, you can quickly display order history without costly joins.
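A sketch of that trade-off, again with PDO and an in-memory SQLite database (all names and values are illustrative):
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->exec('CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)');
$pdo->exec('CREATE TABLE orders (
    id            INTEGER PRIMARY KEY,
    customer_id   INTEGER REFERENCES customers(id),
    customer_name TEXT,  -- the redundant, denormalized copy
    total         REAL
)');
$pdo->exec("INSERT INTO customers VALUES (1, 'Alice')");
$pdo->exec("INSERT INTO orders VALUES (1, 1, 'Alice', 99.90)");

// Fast read path: the report needs no join at all.
$report = $pdo->query('SELECT customer_name, total FROM orders')->fetchAll();

// The price: a rename must now be applied in two places to stay consistent.
$pdo->prepare('UPDATE customers SET name = ? WHERE id = ?')->execute(['Bob', 1]);
$pdo->prepare('UPDATE orders SET customer_name = ? WHERE customer_id = ?')
    ->execute(['Bob', 1]);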
Conclusion: Denormalization is a technique in database design that involves introducing redundancy to improve read performance and query efficiency. It's like making quick notes to speed up information retrieval. While denormalization can enhance performance, it also requires careful management to ensure data consistency. It's typically used in scenarios where read operations are more frequent than write operations.
113. What types of relationships exist in a database?
Formal Explanation: In database design, relationships define how tables are connected and interact with each other. There are three main types of relationships: one-to-one, one-to-many (or many-to-one), and many-to-many.
Simplified Explanation: Think of database relationships like friendships: you can have one best friend, one friend who has many other friends, or a group of friends who all know each other.
Detailed Explanation:
One-to-One Relationship: This is like having a unique ID card. Each person has their own card, and no one shares it. In a database, one record in one table is related to one record in another table.
One-to-Many Relationship: Imagine having a favorite musician and they have many fans. Each fan admires one musician, but that musician can have many fans. Similarly, in a database, one record in a table (e.g., musician) is related to multiple records in another table (e.g., fans).
Many-to-Many Relationship: Think of a social network where users can have many friends, and those friends can have many other friends too. This is like a party where everyone knows everyone. In a database, multiple records in one table can be related to multiple records in another table.
Database relationships are essential for organizing and linking information efficiently. For instance, in an e-commerce site, each product can have multiple reviews (one-to-many), while products and tags form a many-to-many relationship: each product can carry many tags, and each tag can apply to many products.
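All three shapes can be expressed directly in a schema. Here is a sketch with PDO and an in-memory SQLite database, one pair of tables per relationship type (all names are illustrative):
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// One-to-one: a UNIQUE foreign key allows at most one profile per user.
$pdo->exec('CREATE TABLE users    (id INTEGER PRIMARY KEY, name TEXT)');
$pdo->exec('CREATE TABLE profiles (id INTEGER PRIMARY KEY,
    user_id INTEGER UNIQUE REFERENCES users(id), bio TEXT)');

// One-to-many: a plain foreign key, one musician to many fans.
$pdo->exec('CREATE TABLE musicians (id INTEGER PRIMARY KEY, name TEXT)');
$pdo->exec('CREATE TABLE fans (id INTEGER PRIMARY KEY,
    musician_id INTEGER REFERENCES musicians(id), name TEXT)');

// Many-to-many: a junction table holds one row per product-tag pair.
$pdo->exec('CREATE TABLE products (id INTEGER PRIMARY KEY, title TEXT)');
$pdo->exec('CREATE TABLE tags     (id INTEGER PRIMARY KEY, label TEXT)');
$pdo->exec('CREATE TABLE product_tag (
    product_id INTEGER REFERENCES products(id),
    tag_id     INTEGER REFERENCES tags(id),
    PRIMARY KEY (product_id, tag_id)
)');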
114. What does it mean when a DBMS supports referential integrity control?
Formal Explanation: When a database management system (DBMS) supports referential integrity control, it ensures that relationships between tables are maintained accurately. It enforces rules that prevent orphaned or invalid data due to broken relationships.
Simplified Explanation: Imagine your school yearbook. If a student moves away, their photo should be removed from the class photo too. Referential integrity control is like that caretaker who makes sure that all photos in the yearbook match the real students in the class.
Detailed Explanation: In a database, tables can have relationships based on keys. For example, a "Customers" table might be related to an "Orders" table using a customer ID. If a customer is deleted, referential integrity control ensures that their corresponding orders are also deleted or handled in a specified way.
When a DBMS supports referential integrity, it prevents or handles scenarios like:
Deleting a record with related child records (CASCADE delete).
Updating a key value in a parent record and propagating the change to all related child records (CASCADE update).
Rejecting changes that would violate relationships.
This control prevents data inconsistencies, orphaned records, and other issues that could arise from broken relationships. It helps maintain data accuracy and reliability.
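Here is a small runnable sketch of both behaviors, using PDO with an in-memory SQLite database (note that SQLite only enforces foreign keys after the PRAGMA below is enabled):
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('PRAGMA foreign_keys = ON'); // opt in to enforcement in SQLite

$pdo->exec('CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)');
$pdo->exec('CREATE TABLE orders (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL
                REFERENCES customers(id) ON DELETE CASCADE
)');
$pdo->exec("INSERT INTO customers (id, name) VALUES (1, 'Alice')");
$pdo->exec('INSERT INTO orders (id, customer_id) VALUES (10, 1)');

// CASCADE delete: removing the customer removes their orders too.
$pdo->exec('DELETE FROM customers WHERE id = 1');
echo $pdo->query('SELECT COUNT(*) FROM orders')->fetchColumn(), "\n"; // 0

// Rejection: an order pointing at a non-existent customer is refused.
try {
    $pdo->exec('INSERT INTO orders (id, customer_id) VALUES (11, 999)');
} catch (PDOException $e) {
    echo 'Rejected: ', $e->getMessage(), "\n";
}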
Conclusion: When a database management system supports referential integrity control, it ensures that relationships between tables are maintained correctly. This prevents data inconsistencies and orphans, helping the database stay organized and reliable. Just like a yearbook caretaker maintains accurate class photos, referential integrity control maintains accurate database relationships.
115. What are primary and foreign keys in a database?
Formal Explanation: A primary key is a unique identifier for a record in a table, ensuring each record's distinctiveness. A foreign key is a field that establishes a link between tables, referencing the primary key of another table to maintain data integrity.
Simplified Explanation: Think of a primary key as a name tag for each student, making sure no two students have the same name. A foreign key is like a student's friend list, connecting students to others by their name tags.
Detailed Explanation: In a database, a primary key uniquely identifies each record in a table. For example, in a "Students" table, the student ID might be the primary key. It ensures that no two students have the same ID.
A foreign key, on the other hand, creates a relationship between two tables. It references the primary key of another table. For instance, in an "Orders" table, a customer ID might be a foreign key linking to the "Customers" table. This maintains data integrity by ensuring that an order is associated with an existing customer.
In simpler terms, a primary key is a unique identifier for a table's records, while a foreign key connects one table to another by referencing the primary key of the other table.
Example: Consider two tables: "Students" and "Courses." In the "Students" table, the primary key is the student ID. In the "Courses" table, a foreign key "student_id" references the "Students" table's primary key. This link ensures that each course is associated with a valid student.
Conclusion: A primary key is a unique identifier for records in a table, while a foreign key establishes relationships between tables by referencing primary keys. They work together to maintain data integrity and enable structured data retrieval. Just like name tags and friend lists, primary and foreign keys ensure order and connections in a database.
116. What is the difference between primary and unique keys in a database?
Formal Explanation: A primary key is used to uniquely identify each record in a table and is essential for data integrity. A unique key ensures that the values in a column (or a set of columns) are unique, but multiple unique keys can exist in a table.
Simplified Explanation: Think of a primary key like a student ID card that ensures each student has a unique identity. A unique key is like a special pen name that guarantees no two students share the same pen name, but there can be different pen names.
Detailed Explanation: In a database, a primary key uniquely identifies each record in a table. It's a crucial component for maintaining data accuracy and integrity. For example, in a "Students" table, the student ID could be the primary key.
A unique key, on the other hand, enforces uniqueness within a column or a set of columns, ensuring that no two rows have the same values there. Unlike the primary key, which cannot contain NULL values and of which a table has exactly one, a table can have multiple unique keys, and unique columns may allow NULLs depending on the DBMS. This is useful when you want to enforce uniqueness without making the column the record's identifier.
In simpler terms, a primary key is like a fingerprint for records, making sure each record is unique. A unique key is more like a special characteristic that ensures distinctness within specific columns.
Example: Consider an "Employees" table. The "employee_id" could be the primary key, guaranteeing each employee has a unique ID. The "email" column could have a unique key, ensuring that no two employees share the same email.
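A short sketch of that difference, with PDO and an in-memory SQLite database (names are illustrative):
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->exec('CREATE TABLE employees (
    employee_id INTEGER PRIMARY KEY,  -- the single identifier for each row
    email       TEXT UNIQUE,          -- a unique key: duplicates are refused
    name        TEXT
)');

$pdo->exec("INSERT INTO employees VALUES (1, 'ann@example.com', 'Ann')");

try {
    // Violates the unique key on email even though employee_id differs.
    $pdo->exec("INSERT INTO employees VALUES (2, 'ann@example.com', 'Other Ann')");
} catch (PDOException $e) {
    echo 'Duplicate email rejected: ', $e->getMessage(), "\n";
}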
117. What are the types of JOINs and how do they differ?
Formal Explanation: There are four main types of JOINs in SQL: INNER JOIN, LEFT JOIN (or LEFT OUTER JOIN), RIGHT JOIN (or RIGHT OUTER JOIN), and FULL JOIN (or FULL OUTER JOIN). They determine how data from multiple tables is combined based on matching or non-matching rows.
Simplified Explanation: Think of JOINs as ways to combine information from different tables in a database. INNER JOIN takes only matching records, LEFT JOIN takes all records from the left table and matching records from the right table, RIGHT JOIN is like LEFT JOIN but for the right table, and FULL JOIN combines all records from both tables.
Detailed Explanation: JOINs in SQL help you retrieve data from multiple related tables. Here are the main types:
- INNER JOIN: Retrieves only the rows that have matching values in both tables. It essentially filters out non-matching rows.
SELECT customers.name, orders.order_date
FROM customers
INNER JOIN orders ON customers.id = orders.customer_id;
- LEFT JOIN (LEFT OUTER JOIN): Retrieves all rows from the left table and matching rows from the right table. If there's no match in the right table, NULL values are returned for right table columns.
SELECT customers.name, orders.order_date
FROM customers
LEFT JOIN orders ON customers.id = orders.customer_id;
- RIGHT JOIN (RIGHT OUTER JOIN): Similar to LEFT JOIN, but retrieves all rows from the right table and matching rows from the left table. Non-matching rows from the left table result in NULL values for left table columns.
SELECT customers.name, orders.order_date
FROM customers
RIGHT JOIN orders ON customers.id = orders.customer_id;
- FULL JOIN (FULL OUTER JOIN): Retrieves all rows from both tables, including matching and non-matching rows. If there's no match in either table, NULL values are returned for the respective table's columns.
SELECT customers.name, orders.order_date
FROM customers
FULL JOIN orders ON customers.id = orders.customer_id;
In simpler terms, think of a JOIN as a way to create a new table by combining rows from multiple tables based on specified conditions. INNER JOIN is like using a filter to only keep matching items, LEFT JOIN is like keeping all items from the left and adding matching ones from the right, RIGHT JOIN does the same for the right table, and FULL JOIN combines everything from both tables.
Example: Consider a "Customers" table and an "Orders" table. An INNER JOIN would give you only the customers who placed orders. A LEFT JOIN would give you all customers and their orders, with NULL for customers who haven't placed orders.
118. What are SQL aggregate functions? Can you provide some examples?
Formal Explanation: SQL aggregate functions are built-in functions that perform calculations on a set of values and return a single value as a result. These functions are often used with the GROUP BY clause to summarize data from multiple rows into a single value.
Simplified Explanation: Imagine you have a lot of data and want to know something about it, like the sum, average, or maximum value. Aggregate functions do these calculations and give you a single answer from a bunch of numbers.
Examples: Here are some common aggregate functions:
- SUM: Adds up the values in a column.
SELECT SUM(sales_amount) FROM orders;
- AVG: Calculates the average of the values in a column.
SELECT AVG(age) FROM employees;
- COUNT: Counts the number of rows.
SELECT COUNT(*) FROM customers;
- MAX: Finds the highest value in a column.
SELECT MAX(price) FROM products;
- MIN: Finds the lowest value in a column.
SELECT MIN(quantity) FROM inventory;
- GROUP BY (not an aggregate function itself, but their usual companion): groups data so that the aggregate calculations are performed within each group.
SELECT department, AVG(salary) FROM employees GROUP BY department;
Detailed Explanation: Aggregate functions are like math helpers for your data. When you need to summarize or analyze a bunch of values, these functions come in handy. You can use them to answer questions like "How much total revenue did we make?", "What's the average age of our customers?", or "How many products do we have in stock?".
The examples above show how to use aggregate functions in SQL queries. You can combine them with other clauses like WHERE and GROUP BY to get specific results. GROUP BY is especially useful when you want to calculate aggregates for different groups within your data, like finding the average salary for each department in a company.
In simpler terms, aggregate functions are like magic tools that help you quickly find answers about your data. Whether you're dealing with sales numbers, employee ages, or inventory quantities, these functions do the heavy lifting of calculations for you.
Conclusion: Aggregate functions are essential tools in SQL to perform calculations on large sets of data. They help you derive meaningful insights from your database by summarizing information into single values. Whether you're crunching numbers or analyzing trends, aggregate functions make data analysis easier and more efficient.
119. Why is the GROUP BY operator used?
Formal Explanation: The GROUP BY operator in SQL is used to group rows from a table based on one or more columns and apply aggregate functions to the grouped data. It's commonly used to perform calculations and analysis on subsets of data within a table.
Simplified Explanation: Imagine you have a large list of data and you want to organize it into categories so you can see summary information for each category. GROUP BY is like putting your data into folders based on specific characteristics.
Examples: Let's say you have a database with a table of sales transactions. Each transaction has a date, a product, and a sales amount. If you want to know the total sales amount for each product, you would use GROUP BY.
For example, to calculate the total sales amount for each product, you could use the following SQL query:
SELECT product, SUM(amount) as total_sales
FROM sales
GROUP BY product;
This query groups the sales transactions by product and calculates the sum of the sales amounts for each product.
Detailed Explanation: The GROUP BY clause is used in conjunction with aggregate functions like SUM, COUNT, AVG, MIN, and MAX to perform calculations on groups of rows that share common values in one or more columns. When you use GROUP BY, you're telling the database to group the rows that have the same values in the specified column(s) and apply the aggregate functions to the grouped data.
For instance, if you have a table of customer orders with columns like order_date, product_id, and quantity, you might want to know the total quantity of each product ordered on different dates. Using GROUP BY, you can group the orders by product_id and order_date and calculate the total quantity for each product on each date.
The GROUP BY operator is particularly useful in generating summary reports, analyzing trends, and performing calculations on specific subsets of data within a table. It allows you to see aggregated information instead of individual rows, making it easier to draw insights from your data.
Conclusion: The GROUP BY operator in SQL is used to group rows from a table based on specified columns and apply aggregate functions to the grouped data. It's used to summarize and analyze data within specific categories or groups, providing valuable insights for decision-making and reporting.
120. What is the difference between WHERE and HAVING in SQL?
Formal Explanation: The WHERE clause is used to filter rows before they are grouped or aggregated, while the HAVING clause is used to filter rows after they have been grouped and aggregated. WHERE works on individual rows before grouping, and HAVING works on grouped results after aggregation.
Simplified Explanation: WHERE is used to filter rows based on specific conditions before grouping and aggregation, while HAVING is used to filter results after they have been grouped and aggregated.
Examples: Imagine you have a sales table with columns for product, category, and sales amount. If you want to find products with sales amount greater than $100, you would use WHERE.
SELECT product, SUM(amount) as total_sales
FROM sales
WHERE amount > 100
GROUP BY product;
On the other hand, if you want to find categories with total sales greater than $1000, you would use HAVING.
SELECT category, SUM(amount) as total_sales
FROM sales
GROUP BY category
HAVING SUM(amount) > 1000;
Detailed Explanation: The WHERE clause is applied to individual rows before they are grouped and aggregated. It filters out rows that don't meet the specified conditions, and only the filtered rows are included in the grouping and aggregation process. This is typically used to narrow down the dataset to be processed.
The HAVING clause, on the other hand, is used to filter the results of grouped and aggregated data. It operates on the results of aggregation functions like SUM, COUNT, AVG, etc. HAVING is used to specify conditions that must be met by the aggregated values after grouping. It is applied after the grouping and aggregation are performed.
In the first example above, the WHERE clause filters out rows with sales amount less than or equal to $100 before calculating the total sales for each product. In the second example, the HAVING clause filters out categories with total sales less than or equal to $1000 after calculating the total sales for each category.
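To see the two clauses side by side, here is a runnable PHP sketch with an in-memory SQLite database and made-up sample data:
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->exec('CREATE TABLE sales (product TEXT, category TEXT, amount REAL)');
$pdo->exec("INSERT INTO sales VALUES
    ('Laptop', 'Electronics', 1200), ('Mouse', 'Electronics', 25),
    ('Desk',   'Furniture',   300),  ('Chair', 'Furniture',   400)");

// WHERE: rows with amount <= 100 are dropped before grouping (Mouse is gone).
$rows = $pdo->query('SELECT product, SUM(amount) AS total_sales
                     FROM sales WHERE amount > 100
                     GROUP BY product')->fetchAll(PDO::FETCH_ASSOC);
print_r($rows); // Laptop, Desk, Chair

// HAVING: groups are built first, then filtered on their aggregate totals.
$rows = $pdo->query('SELECT category, SUM(amount) AS total_sales
                     FROM sales GROUP BY category
                     HAVING SUM(amount) > 1000')->fetchAll(PDO::FETCH_ASSOC);
print_r($rows); // only Electronics (1225); Furniture totals just 700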
Previous articles of the series:
Mastering the PHP Developer Interview: 100+ Technical Questions Answered. 1-15.
Mastering the PHP Developer Interview: 100+ Technical Questions Answered. 16-30.
Mastering the PHP Developer Interview: 100+ Technical Questions Answered. 31-45.
Mastering the PHP Developer Interview: 100+ Technical Questions Answered. 46-60.
Mastering the PHP Developer Interview: 100+ Technical Questions Answered. 61-75.
Mastering the PHP Developer Interview: 100+ Technical Questions Answered. 91-105.