Inversion of Control

When writing code we need to be conscious of the interdependence of different objects or modules of code. This interdependence is called coupling. Tightly coupled code is highly dependent on another piece of code for its functioning. In an object oriented world we want to make each unique set of code (object or module) interchangeable so that it can be replaced easily. To do this we have to reduce the amount of interdependency between sets of code. This is what we are referring to when we mention decoupling code.

The Dependency Inversion Principle (DIP) is a specific form of decoupling modules of software. It is so important a part of object oriented programming that it is the ‘D’ in SOLID. The principle states that high level modules should not be dependent on lower level modules; instead both should depend on abstractions. Likewise, abstractions should not depend on details; the details should depend on the abstractions. High level modules should be independent of the implementation details of the low level modules, and low level modules should be designed with that interaction in mind since the interfaces may need to change. However, inverting the dependency does not mean that lower level layers come to depend on higher level layers. Both should depend on the abstract interface between them. This reduces coupling between components without adding more code or coding patterns.

Inversion of Control (IoC) is a principle within the DIP and a way of designing code to implement it. With IoC, the custom code written for the application’s functionality does not control the flow of the program; instead, the flow of control comes from a more generic framework. In procedural programming the code calls reusable libraries to perform generic tasks, whereas with IoC the framework calls into the task-specific code. The idea is to increase modularity by using a framework that knows the common behaviors and filling in the specifics of performing tasks with the custom code.
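
To make the idea concrete, here is a minimal TypeScript sketch (the MiniFramework name and handlers are hypothetical): the generic framework owns the flow of control and calls into registered task-specific code, rather than the application code calling out to a library.

```typescript
// Minimal sketch of inversion of control: the framework owns the flow
// and calls into the task-specific code, rather than the other way around.
type Handler = () => void;

// The generic "framework" knows the common behavior (when to run things)...
class MiniFramework {
  private handlers: Handler[] = [];

  register(handler: Handler): void {
    this.handlers.push(handler);
  }

  // The framework, not the application code, controls the flow.
  run(): void {
    for (const handler of this.handlers) {
      handler();
    }
  }
}

// ...and the custom code only fills in the task-specific details.
const framework = new MiniFramework();
framework.register(() => console.log("application-specific work"));
framework.run();
```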

There are many more ways to apply Dependency Inversion than just Inversion of Control. Diving deeper into these principles and patterns you see how, when they are applied, the SOLID principles interrelate and work together to form better and more maintainable software. Individually they may seem like a lot of work to implement, but as you start implementing one you’ll notice that the process of implementing the others becomes easier.

Episode Breakdown

12:50 Dependency Injection (DI)

Instead of having a module of code call the other modules it needs in order to function, Dependency Injection passes those into it from the original caller of that module. A dependency, then, is any service not in the module of code that is required for its functionality. This terminology can be a bit misleading. It’s not injecting a new dependency but rather a provider for that dependency. The idea of injection is that the module needing the service isn’t the one in charge of getting it; it is passed in when calling that module. The goal is to decouple objects so that code doesn’t have to be changed because something it depends on changed.

There are four components of dependency injection. The service is the object that is depended upon in order for the module to function. The client is the module of code being called that is dependent on the service. Interfaces define how the client is able to use and interact with the service. The calling code responsible for constructing the service and injecting it into the client is the injector.
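
As a rough illustration, here is a hypothetical TypeScript sketch showing all four components: a MessageService interface, an EmailService as the service, a Notifier client that only knows the interface, and the calling code acting as the injector.

```typescript
// The interface: how the client is allowed to talk to the service.
interface MessageService {
  send(message: string): void;
}

// The service: the concrete dependency.
class EmailService implements MessageService {
  send(message: string): void {
    console.log(`Emailing: ${message}`);
  }
}

// The client: depends only on the interface, never on EmailService directly.
class Notifier {
  constructor(private service: MessageService) {}

  notify(message: string): void {
    this.service.send(message);
  }
}

// The injector: the calling code that constructs the service and injects it.
const notifier = new Notifier(new EmailService());
notifier.notify("Build finished");
```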

Rather than allowing the client to create or find the service, Dependency Injection passes that service in when constructing the module so that it is part of that module’s state. The client is not allowed to construct its dependencies with new or look them up through static methods. The responsibility for providing dependencies is delegated to the injector, the code calling the client. The client itself doesn’t need to know about the injecting code, only the interfaces of the services being injected. It should not have any knowledge of the implementation of its dependencies. The client code should not have to change if the code behind the interface changes.

There are three common types of dependency injection, all allowing an object to receive a reference from an external caller. Constructor injection passes the dependencies into the client via a class constructor when constructing the client. Setter injection uses a public setter method on the client to set or inject the dependency. Interface injection has the dependency provide an injector method that will inject the dependency into a client passed to it. For this to work the client must have an interface with an exposed or public setter that accepts the dependency; it is used to tell the injector how to talk to the client. Other types of frameworks exist for injecting dependencies. Some testing frameworks do not require clients to actively accept injection, which makes testing legacy code possible. In Java it’s possible to use reflection to make private attributes public when testing. Some IoC frameworks replace dependencies instead of removing them.
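
Continuing the hypothetical MessageService example above, a minimal sketch of setter injection might look like this, with the injector calling a public setter instead of the constructor:

```typescript
// Setter injection: the injector provides the dependency through a public setter.
class Report {
  private service?: MessageService;

  // The injector calls this setter to provide the dependency.
  setMessageService(service: MessageService): void {
    this.service = service;
  }

  publish(): void {
    if (!this.service) {
      throw new Error("MessageService has not been injected");
    }
    this.service.send("Report published");
  }
}

const report = new Report();
report.setMessageService(new EmailService()); // injection happens here
report.publish();
```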

22:00 Callbacks

A callback is code that is passed into a function as an argument with the expectation that the function will call back into that code and execute it at a certain point in its execution. Synchronous callbacks occur immediately whereas asynchronous callbacks may be executed at a later time. In languages like JavaScript functions are objects and can be passed as arguments to other functions or returned from them; functions that take or return other functions are called higher-order functions.
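
For example, a rough TypeScript sketch of the difference (the function and variable names are made up): the synchronous callback runs before mapNumbers returns, while the asynchronous callback runs after fetchLater has already returned.

```typescript
// Synchronous callback: invoked immediately, before the function returns.
function mapNumbers(values: number[], callback: (n: number) => number): number[] {
  return values.map(callback);
}
console.log(mapNumbers([1, 2, 3], (n) => n * 2)); // [2, 4, 6]

// Asynchronous callback: invoked later, after the surrounding function has returned.
function fetchLater(callback: (data: string) => void): void {
  setTimeout(() => callback("data arrived"), 100);
}
fetchLater((data) => console.log(data));
```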

Various languages support callbacks differently; they may be implemented via subroutines, lambda expressions, blocks, or function pointers. A subroutine is a set of code that performs a specific task, packaged as a unit and used wherever that task is needed. Lambda functions, or anonymous functions, are functions not bound to an identifier. Anonymous functions tend to be arguments passed into higher-order functions.

“Javascript objects are object-ish…”

Callbacks are designed and defined by how they control the flow of data at runtime. Blocking callbacks are synchronous in that they are invoked before the function returns a value, blocking the function from completing until the callback returns. Deferred callbacks, on the other hand, are asynchronous and may be invoked after the function returns; they are often used in event handling and I/O operations.

Implementation of callbacks varies based on the style and type of language you are using. In dynamic languages like JavaScript, Python, PHP, Perl, etc. functions are objects that can be passed into other functions. Functional languages often treat functions as first-class citizens, allowing them to be passed as callbacks into other functions. Some languages allow functions to be passed as closures, allowing them to access and modify locally defined variables. In older languages like C, C++, Pascal, and even assembly, a machine-level pointer to a function can be passed into another function.
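
As a small illustration of a callback passed as a closure (the names are hypothetical), the callback below reads and modifies a variable defined outside of it:

```typescript
// A higher-order function that invokes the callback for each item.
function forEachItem<T>(items: T[], callback: (item: T) => void): void {
  for (const item of items) {
    callback(item);
  }
}

let total = 0; // local state captured by the closure below
forEachItem([5, 10, 15], (n) => {
  total += n; // the closure modifies the enclosing variable
});
console.log(total); // 30
```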

30:20 Schedulers

Scheduling is the process of assigning resources to complete assigned work. Schedulers work by scheduling and tracking batched tasks. A job or batch scheduler is used for controlling the execution of background tasks.

Operating System (OS) scheduling allows a computer to “multitask” while still using a single CPU. Schedulers are set up to keep all resources busy so that multiple users can share the same system resources in an effective manner. The process scheduler decides which processes are allowed to run at a given time. It can pause running processes, start new processes, or move processes within the queue.

“This is high priority because it involves bourbon, this is low priority because it involves…gin.”

OS kernels have different schedulers for different levels of the process of accessing memory and resources. The long-term scheduler authorizes or delays processes from entering the queue in main memory when a program attempts to execute. An I/O bound process spends most of its time doing Input/Output instead of computations, whereas a CPU bound process spends most of its time doing computations.

The medium-term scheduler is responsible for swapping processes between main memory and secondary or hard disk memory. Processes that are swapped out to secondary memory tend to be low priority or inactive over a period of time. This is done to free up main memory for higher priority or faster processes. Slower, lower priority processes are swapped back in when more memory is available.

“This is the issue I had with the craptop, because Windows is super freakin’ heavy.”

The short-term scheduler or CPU scheduler controls the in-memory processes and determines which is to be executed by the CPU. It makes more frequent decisions than the previous schedulers. It can also forcibly remove processes from the CPU; this is called being preemptive.

Preemptive schedulers rely on an interval timer that runs in the kernel. A dispatcher is the module that hands over control of the CPU to the process once it is selected by the short-term scheduler. Dispatchers context switch by saving the state of the process that was running and then replacing it with a new process. They are also used for switching to user mode or starting a program at the proper location based on new or saved state.

Schedulers use algorithms known as scheduling disciplines to distribute resources among processes be they simultaneous or asynchronous. Scheduling algorithms minimize the amount of resource starvation to ensure that processes aren’t continually denied resources.

“Like, if this thing is just beating the hard disk, you’ve gotta stop for a minute and let something else hit the hard disk.”

First in, first out (first come, first served) is the simplest of these algorithms. Processes are queued based on the order they arrived. The overhead of scheduling is minimized because context switches only happen at the termination of a process. Turnaround, response, and wait times depend on when processes arrive. Because every process runs to completion there is no risk of starvation. Earliest deadline first is an algorithm that waits until a process terminates then searches for the next one closest to its deadline.
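
A toy sketch of first in, first out, assuming each process declares how long it needs (the names and burst times are made up): processes simply run to completion in arrival order.

```typescript
// Toy illustration of FIFO scheduling: processes run to completion in
// arrival order, so context switches only happen on termination.
interface Process {
  name: string;
  burstTime: number; // time needed to finish
}

function runFifo(queue: Process[]): void {
  let clock = 0;
  for (const process of queue) {
    clock += process.burstTime;
    console.log(`${process.name} finished at t=${clock}`);
  }
}

runFifo([
  { name: "A", burstTime: 4 },
  { name: "B", burstTime: 1 },
  { name: "C", burstTime: 3 },
]);
```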

In shortest remaining first the scheduler rearranges the queue of waiting processes based on the least amount of time estimated to complete the process. The scheduling algorithm needs advance knowledge of the time required for a process. The current process may be interrupted if a shorter process arrives in the queue while it is running. Longer processes will have longer wait and response times and may suffer from starvation if shorter processes are continually being added to the queue.

The operating system may assign a fixed priority rank to each process it receives such as in fixed priority pre-emptive scheduling. Processes are then reordered in the queue by their priority rank. A lower priority task may suffer longer waits and starvation because of incoming higher priority tasks.

With round-robin scheduling a fixed amount of time is allocated for running processes and the scheduler cycles through each process for the amount of time set. This requires a lot of overhead because of the constant context switching. Shorter jobs will be completed faster than in FIFO and longer ones faster than in shortest remaining first. The average response time is good but individual wait times will vary depending on the number of processes in the queue.
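
Reusing the hypothetical Process shape from the FIFO sketch, a toy round-robin loop might look like this, with each process getting a fixed time slice and being requeued until it finishes:

```typescript
// Toy round-robin scheduler: each process gets a fixed time slice, then
// goes to the back of the queue if it still has work remaining.
function runRoundRobin(queue: Process[], timeSlice: number): void {
  let clock = 0;
  const ready = queue.map((p) => ({ ...p })); // copy so we can mutate burst times
  while (ready.length > 0) {
    const process = ready.shift()!;
    const used = Math.min(timeSlice, process.burstTime);
    clock += used;
    process.burstTime -= used;
    if (process.burstTime > 0) {
      ready.push(process); // not finished: context switch and requeue
    } else {
      console.log(`${process.name} finished at t=${clock}`);
    }
  }
}

runRoundRobin(
  [
    { name: "A", burstTime: 4 },
    { name: "B", burstTime: 1 },
    { name: "C", burstTime: 3 },
  ],
  2
);
```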

Multilevel queue scheduling is used when processors can be divided into different groups such as foreground and background processing.

42:30 Event Loop

The message dispatcher or event loop makes requests to an event provider and then calls the relevant event handler. Event loops are one way of implementing message passing. Message pumps move messages from the message queue in the underlying OS into the program for processing. Main loops are top level event loops that control the flow of the entire program. Most modern applications have a main loop that is only entered if there is something to be processed.
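
As a rough sketch of the idea (the queue, handler names, and event types are made up), a minimal event loop pulls messages off a queue and dispatches each one to its registered handler:

```typescript
// Minimal sketch of an event loop: pull messages off a queue and dispatch
// each one to the handler registered for its type.
type EventHandler = (payload: string) => void;

const handlers = new Map<string, EventHandler>();
const messageQueue: { type: string; payload: string }[] = [];

function on(type: string, handler: EventHandler): void {
  handlers.set(type, handler);
}

function post(type: string, payload: string): void {
  messageQueue.push({ type, payload });
}

// The "main loop": runs as long as there is something to process.
function runEventLoop(): void {
  while (messageQueue.length > 0) {
    const message = messageQueue.shift()!;
    handlers.get(message.type)?.(message.payload);
  }
}

on("click", (payload) => console.log(`click handled: ${payload}`));
post("click", "button-1");
runEventLoop();
```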

Unix, and similarly Linux, treat everything as a file, so they have file-based event looping. The file I/O controls all reading, writing, and communication, both internal and over the network. However, asynchronous events or signals are not handled by the file interface. They are received through signal handlers, small pieces of code that run while the rest of the task is suspended.

Event driven programming is a paradigm that allows control of the application flow to be determined by user actions or inputs. These actions can be user input such as mouse clicks or key presses, input from sensors such as in IoT devices, or messages from other programs such as when building an API. This is the dominant paradigm in designing graphical user interfaces (GUI). It is also used in device drivers such as USB. It involves a main loop that listens for events and triggers a callback when one is detected. With embedded systems this is achieved via hardware interrupts instead of a main loop. This works best with high-level languages that have constructs like await and closures. Event listening may be handled by the framework, while checking for events in the main loop is common in application code.

“For instance, Dot Net Winforms, that’s built to handle that stuff”

There are a few alternative designs that contrast with the event loop design. Historically programs would simply run once then terminate, such as command-line driven programs. Parameters were set up ahead of time and passed into the program. This design did not allow for user interaction within the program. Alternatively, menu-driven designs may still have a main loop but are not exactly event driven. Users are presented with options that narrow down the task to be performed. This design allows for some interaction with users.

47:00 Related Design Patterns

The Template Method Pattern is a behavioral design pattern that provides a template for building an algorithm. It is usually implemented as a base or abstract class with shared code and constants. It ensures the controlling algorithm is always followed. Variable parts are given default values or implementations. The abstract class is then given concrete implementations that fill in the empty or variable parts of the template. Algorithms vary from implementation to implementation. High level code doesn’t determine which algorithm to run; instead a lower level algorithm is selected at run-time.
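
A short, hypothetical TypeScript sketch of the pattern: the abstract DataExporter fixes the order of the algorithm, while concrete subclasses fill in the variable writeRecord step.

```typescript
// Template Method: the abstract base class fixes the controlling algorithm,
// and subclasses fill in the variable steps.
abstract class DataExporter {
  // The template method: always followed in this order.
  export(records: string[]): void {
    this.openOutput();
    for (const record of records) {
      this.writeRecord(record);
    }
    this.closeOutput();
  }

  // Hooks with default (empty) implementations.
  protected openOutput(): void {}
  protected closeOutput(): void {}

  // Variable step that concrete subclasses must provide.
  protected abstract writeRecord(record: string): void;
}

class ConsoleExporter extends DataExporter {
  protected writeRecord(record: string): void {
    console.log(record);
  }
}

new ConsoleExporter().export(["row 1", "row 2"]);
```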

The Service Locator Pattern is a design pattern that encapsulates the process of getting a service behind an abstraction layer. It uses a central registry, or service locator, that returns the information needed to perform a given task. All the dependencies are listed at the beginning of the application design. This simplifies component-based applications, where dependency injection would be a more complicated way of connecting objects. The biggest concern with this pattern is that it obscures dependencies. This makes the registry a ‘black box’ to the rest of the system. The code is harder to test and maintain because errors occur at run-time instead of compile time.
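
Here is a minimal, hypothetical sketch of a service locator; note how the client reaches into the registry for its dependency, and how a missing registration only fails at run-time:

```typescript
// Service Locator: a central registry hands out services, so the
// dependency is looked up rather than injected.
class ServiceLocator {
  private static services = new Map<string, unknown>();

  static register(name: string, service: unknown): void {
    this.services.set(name, service);
  }

  static get<T>(name: string): T {
    const service = this.services.get(name);
    if (!service) {
      throw new Error(`Service not registered: ${name}`); // fails at run-time, not compile time
    }
    return service as T;
  }
}

// Dependencies are registered up front...
ServiceLocator.register("logger", { log: (msg: string) => console.log(msg) });

// ...and the client reaches into the registry, hiding the dependency from callers.
const logger = ServiceLocator.get<{ log: (msg: string) => void }>("logger");
logger.log("started");
```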

The Strategy Pattern is another behavioral design pattern that lets the code select an algorithm at run-time. Instead of implementing algorithms directly, the code receives instructions at run-time about which of a set of algorithms to use. This lets algorithms be independent of the clients that use them. It does so by storing references to code in a data structure; the code is then accessed through that reference when needed. It uses composition instead of inheritance. In the pattern, class behavior isn’t inherited but encapsulated in interfaces. Behaviors are defined in interfaces and their implementations.
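
A brief, hypothetical sketch of the pattern: the shipping algorithms are encapsulated behind a ShippingStrategy interface, and the Order holds a reference to whichever one is selected at run-time.

```typescript
// Strategy: behavior is encapsulated in an interface, and the concrete
// algorithm is chosen and swapped at run-time.
interface ShippingStrategy {
  cost(weightKg: number): number;
}

const groundShipping: ShippingStrategy = { cost: (w) => 1.5 * w };
const airShipping: ShippingStrategy = { cost: (w) => 4.0 * w };

class Order {
  // The strategy is held by reference (composition), not inherited.
  constructor(private shipping: ShippingStrategy) {}

  setShipping(shipping: ShippingStrategy): void {
    this.shipping = shipping; // swap the algorithm at run-time
  }

  total(weightKg: number): number {
    return this.shipping.cost(weightKg);
  }
}

const order = new Order(groundShipping);
console.log(order.total(2)); // 3
order.setShipping(airShipping);
console.log(order.total(2)); // 8
```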

IoTease: Project

Alexa Presentation Language


Amazon announced today that they have built a new language for developing Alexa skills called the Alexa Presentation Language (APL). It will allow developers to build interactive experiences with the Alexa platform. These include graphics, images, and videos that can be played on devices such as the Echo Show, Fire TV, etc. You can also create content for devices using the Alexa Smart Screen or TV Device SDK. The language is built for working with voice commands and has several built-in components including image/text views, pagers, layouts, and conditional expressions.

Tricks of the Trade

In your job or when consulting, your boss is not looking for a tightly coupled component. They want to tell you what to do and have it get done without needing to know the implementation details. You do not need to list best practices such as refactoring as line items for the business people to see. This is where abstraction applies in the real world. Give your boss an interface so you can talk about the things they care about, business items, and not the technical details or implementations. Decouple your boss from the way you work.
