
Functional Programming For Dummies

By: John Paul Mueller Published: 02-06-2019

Your guide to the functional programming paradigm 

Functional programming mainly sees use in math computations, including those used in Artificial Intelligence and gaming. This programming paradigm makes the algorithms used for math calculations easier to understand and provides a concise way to code them, even for people who aren't developers. Current books on the market have a significant learning curve because they're written for developers, by developers. Until now.

Functional Programming for Dummies explores the differences between the pure (as represented by the Haskell language) and impure (as represented by the Python language) approaches to functional programming for readers just like you. The pure approach is best suited to researchers who have no desire to create production code but do need to test algorithms fully and demonstrate their usefulness to peers. The impure approach is best suited to production environments because it's possible to mix coding paradigms in a single application to produce a result more quickly. Functional Programming For Dummies uses this two-pronged approach to give you an all-in-one guide to a coding methodology that can otherwise be hard to grasp.

  • Learn the difference between the pure and impure approaches to coding
  • Dive into the processes that most functional programmers use to derive, analyze and prove the worth of algorithms
  • Benefit from examples that are provided in both Python and Haskell
  • Glean the expertise of an author who has written some of the market-leading programming books to date

If you’re ready to massage data to understand how things work in new ways, you’ve come to the right place!

Articles From Functional Programming For Dummies

Functional Programming: Creating Lambda Functions in Haskell and Python

Article / Updated 05-08-2019

Functional programming is a paradigm, which means that it doesn't have an implementation. The basis of functional programming is lambda calculus, which is actually a math abstraction. Consequently, when you want to perform tasks by using the functional programming paradigm, you're really looking for a programming language that implements functional programming in a manner that meets your needs. Two languages that are ideal for functional programming are Haskell and Python.

Creating lambda functions in Haskell

You can create functions in Haskell. For example, if you want to create a curried function to add two numbers together, you might use add x y = x + y. This form of code creates a named function. However, you can also create anonymous functions in Haskell that rely on lambda calculus to perform a task. The difference is that the function actually is anonymous (it has no name), and you assign it to a variable. To see how this process works, open a copy of the Haskell interpreter and type the following code:

add = \x -> \y -> x + y

Notice how lambda functions rely on the backslash for each variable declaration and the map (->) symbol to show how the variables are mapped to an expression. You now have a lambda function to use in Haskell. To test it, type add 1 2 and press Enter. The output is 3, as expected.

Obviously, this use of lambda functions isn't all that impressive; you could use the function form without problem. However, lambda functions do come in handy for other uses. For example, you can create specially defined operators. The following code creates a new operator, +=:

(+=) = \x -> \y -> x + y

To test this code, you type 1 += 2 and press Enter. Again, the output is 3, as you might expect. Haskell does allow a shortcut method for defining lambda functions. You can create this same operator using the following code:

(+=) = \x y -> x + y

Creating lambda functions in Python

As with the Haskell function, you can also create a lambda function version of the add function. When creating a lambda function in Python, you define the function anonymously and rely on the lambda keyword, as shown here:

add = lambda x, y: x + y

Notice that this particular example assigns the function to a variable. However, you can use a lambda function anywhere that Python expects to see an expression or a function reference. You use this function much as you would any other function. Type add(1, 2), execute the code, and you see 3 as output. If you want to follow a more precise lambda function formulation, you can create the function like this:

add = lambda x: lambda y: x + y

In this case, you see more clearly how the lambda sequence works, but it's extra work. To use this function, you type add(1)(2) and execute the code. Python applies the values as you might think, and the code outputs a value of 3.

Python doesn't allow you to create new operators, but you can override existing operators. Here, however, you create a new use for the letter X using a lambda function. To begin this process, you must install the Infix module by opening the Anaconda Prompt, typing pip install infix at the command prompt, and pressing Enter. After a few moments, pip will tell you that it has installed Infix for you. The following code will let you use the letter X to multiply two values:

from infix import mul_infix as Infix
X = Infix(lambda x, y: x * y)
5 *X* 6
X(5, 6)

The first statement imports mul_infix as Infix. You have access to a number of infix methods, but this example uses this particular one. The module's page on pypi.org discusses the other forms of infix at your disposal. The second statement sets X as the infix function using a lambda expression. The manner in which Infix works allows you to use X as either an operator, as shown by 5 *X* 6, or a regular function, as shown by X(5, 6). When used as an operator, you must surround X with the multiplication operator, *. If you were to use shift_infix instead, you would use the shift operators (<< and >>) around the lambda function that you define as the operator.
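To see what makes the 5 *X* 6 trick possible, here is a minimal sketch of how an infix wrapper of this kind could be built in plain Python. The class name MulInfix is invented for this example; the real Infix module's internals may differ.

```python
# A sketch (not the Infix module's actual implementation) of an
# infix-style operator built from Python's operator-overloading hooks.
class MulInfix:
    def __init__(self, func):
        self.func = func

    def __rmul__(self, left):
        # Handles the left half, 5 * X: capture the left operand and
        # return a new wrapper waiting for the right operand.
        return MulInfix(lambda right: self.func(left, right))

    def __mul__(self, right):
        # Handles the right half, (5 * X) * 6: apply the stored function.
        return self.func(right)

    def __call__(self, left, right):
        # Allows plain function-call syntax, X(5, 6).
        return self.func(left, right)

X = MulInfix(lambda x, y: x * y)
print(5 * X * 6)   # 30
print(X(5, 6))     # 30
```

The chain 5 * X * 6 works because Python first calls __rmul__ on X with 5, producing a partially applied wrapper, and then calls __mul__ on that wrapper with 6.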

Understanding the Rules of Lambda Calculus for Functional Programming

Article / Updated 05-08-2019

You use three different operations to perform tasks using lambda calculus: creating functions to pass as variables; binding a variable to the expression (abstraction); and applying a function to an argument. The following sections describe all three operations, which functional programmers can view as rules that govern all aspects of working with lambda calculus.

Working with variables in lambda calculus

When considering variables in lambda calculus, the variable is a placeholder (in the mathematical sense) and not a container for values (in the programming sense). Any variable, x, y, or z (or whatever identifier you choose to use), is a lambda term. Variables provide the basis of the inductive (the inference of general laws from specific instances) definition of lambda terms. To put this in easier-to-understand terms, if you always leave for work at 7:00 a.m. and are always on time, inductive reasoning says that you will always be on time as long as you leave by 7:00 a.m.

Induction in math relies on two cases to prove a property. For example, a common proof is a property that holds for all natural numbers. The base (or basis) case proves the property for a particular number, usually 0. The inductive case, also called the inductive step, proves that if the property holds for a natural number (n), it must also hold for the next natural number (n + 1).

Variables may be untyped or typed. Typing isn't quite the same here as in other programming paradigms; the use of typing doesn't actually indicate a kind of data. Rather, it defines how to interpret the lambda calculus.

Untyped variables in lambda calculus

The original version of Church's lambda calculus has gone through a number of revisions as the result of input by other mathematicians. The first such revision came as the result of input from Stephen Kleene and J. B. Rosser in 1935 in the form of the Kleene–Rosser paradox. (The article on Quora provides a basic description of this issue.) A problem exists in the way that logic worked in the original version of lambda calculus, and Church fixed this problem in a succeeding version by removing restrictions on the kind of input that a function can receive. In other words, a function has no type requirement. The advantage of untyped lambda calculus is its greater flexibility; you can do more with it. However, the lack of type also means that untyped lambda calculus is nonterminating. In some cases, you must use typed lambda calculus to obtain a definitive answer to a problem.

Simply-typed variables in lambda calculus

Church created simply-typed lambda calculus in 1940 to address a number of issues in untyped lambda calculus, the most important of which is an issue of paradoxes in which β-reduction can't terminate. In addition, the use of simple typing provides a means for strongly proving the calculus.

Using application in lambda calculus

The act of applying one thing to another seems simple enough. When you apply peanut butter to toast, you get a peanut butter sandwich. Application in lambda calculus is almost the same thing. If M and N are lambda terms, the combination MN is also a lambda term. In this case, M generally refers to a function and N generally refers to an input to that function, so you often see these terms written as (M)N. The input, N, is applied to the function, M. Because the purpose of the parentheses is to define how to apply terms, it's correct to refer to the pair of parentheses as the apply operator.

Understanding that application implies nesting is essential. In addition, because lambda calculus uses only functions, inputs are functions. Consequently, saying M2(M1N) would be the same as saying that the function M1 is applied as input to M2 and that N is applied as input to M1.

In some cases, you see lambda calculus written without the parentheses. For example, you might see EFG as three lambda terms. However, lambda calculus is left-associative by default, which means that when you see EFG, what the statement is really telling you is that F is applied to E and then G is applied to the result, or ((E)F)G. Using the parentheses tends to avoid confusion. Also, be aware that the associative math rule doesn't apply in this case: ((E)F)G is not equivalent to E(F(G)).

To understand the idea of application better, consider the following pseudocode:

inc(x) = x + 1

All this code means is that to increment x, you add 1 to its value. The lambda calculus form of the same pseudocode written as an anonymous function looks like this:

(x) -> x + 1

You read this statement as saying that the variable x is mapped to x + 1. However, say that you have a function that requires two inputs, like this:

square_sum(x, y) = x^2 + y^2

The lambda calculus form of the same function written in anonymous form looks like this:

(x, y) -> x^2 + y^2

This statement is read as saying that the tuple (x, y) is mapped to x^2 + y^2. However, as previously mentioned, lambda calculus allows functions to have just one input, and this one has two. To properly apply the functions and inputs, the code would actually need to look like this:

x -> (y -> x^2 + y^2)

At this point, x and y are mapped separately. Transitioning the code so that each function has only one argument is called currying. This transition isn't precisely how you see lambda calculus written, but it does help explain some of the underlying mechanisms.

Using abstraction in lambda calculus

The term abstraction derives from the creation of general rules and concepts based on the use and classification of specific examples. The creation of general rules tends to simplify a problem. For example, you know that a computer stores data in memory, but you don't necessarily understand the underlying hardware processes that allow the management of data to take place. The abstraction provided by data storage rules hides the complexity of viewing this process each time it occurs. The following information describes how abstraction works for both untyped and typed lambda calculus.

Abstracting untyped lambda calculus

In lambda calculus, when E is a lambda term and x is a variable, λx.E is a lambda term. An abstraction is a definition of a function but doesn't invoke the function; to invoke the function, you must apply it. Consider this function:

f(x) = x + 1

The lambda abstraction for this function is

λx.x + 1

Remember that lambda calculus has no concept of a variable declaration. Consequently, when abstracting a function such as

f(x) = x^2 + y^2

to read

λx.x^2 + y^2

the variable y is considered a function that isn't yet defined, not a variable declaration. To complete the abstraction, you would create the following:

λx.(λy.x^2 + y^2)

Abstracting simply-typed calculus

The abstraction process for simply-typed lambda calculus follows the same pattern as described for untyped lambda calculus, except that you now need to add type. In this case, the term type doesn't refer to string, integer, or Boolean (the types used by other programming paradigms). Rather, type refers to the mathematical definition of the function's domain (the set of inputs that the function accepts) and range (the codomain or image, that is, the set of outputs it can produce), which is represented by A -> B. All this talk about type really means is that the function can now accept only inputs of the correct type, and it can produce outputs of only certain types as well. Alonzo Church originally introduced the concept of simply-typed calculus to avoid the paradoxical uses of untyped lambda calculus. A number of lambda calculus extensions also rely on simple typing, including products, coproducts, natural numbers (System T), and some types of recursion (such as Programming Computable Functions, or PCF).

The important issue discussed here is how to represent a typed form of a lambda calculus statement. For this task, you use the colon (:) to display the expression or variable on the left and the type on the right. For example, referring to the increment abstraction shown above, you include the type as shown here:

λx:ν.x + 1

In this case, the parameter x has a type of ν (nu), which represents natural numbers. This representation doesn't tell you the output type of the function, but because + 1 would result in a natural-number output as well, it's easy to make the required assumption. This is the Church style of notation. However, in many cases you need to define the type of the function as a whole, which requires the Curry style of notation. Here is the alternative method:

(λx.x + 1):ν -> ν

Moving the type definition outside means that the example now defines the type for the function as a whole rather than for x. You infer that x is of type ν because the function parameters require it.

When working with multiparameter inputs, you must curry the function as shown before. In this case, to assign natural numbers as the type for the sum-square function, you might show it like this:

λx:ν.(λy:ν.x^2 + y^2)

Note the placement of the type information after each parameter. You can also define the function as a whole, like this:

(λx.(λy.x^2 + y^2)):ν -> ν -> ν

Each parameter appears separately, followed by the output type. A great deal more exists to discover about typing, but this discussion gives you what you need to get started without adding complexity. The article at Goodmath.org provides additional insights that you may find helpful.

When working with particular languages, you may see the type indicated directly rather than indirectly using Greek letters. For example, when working with a language that supports the int data type, you may see int used directly, rather than the less direct form of ν shown in the previous examples. For example, the following code shows an int alternative to the λx:ν.x + 1 code shown earlier:

λx:int.x + 1
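The curried, typed abstractions above map naturally onto Python. The following sketch mirrors λx:ν.(λy:ν.x^2 + y^2) and λx:ν.x + 1, with typing.Callable annotations standing in loosely for the simple types (Python doesn't enforce these annotations at runtime).

```python
from typing import Callable

# λx.(λy.x^2 + y^2): a chain of one-argument functions (currying).
# The outer lambda takes x and returns another function waiting for y.
square_sum: Callable[[int], Callable[[int], int]] = (
    lambda x: lambda y: x**2 + y**2
)

# λx:ν.x + 1: the typed increment abstraction.
inc: Callable[[int], int] = lambda x: x + 1

print(square_sum(3)(4))  # 25
print(inc(41))           # 42
```

Calling square_sum(3) applies only the first abstraction and returns a function; applying that result to 4 completes the computation, just as nested application works in lambda calculus.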

10 Occupations for Functional Programmers

Article / Updated 05-08-2019

For many people, the reason to learn a new language or a new programming paradigm focuses on the ability to obtain gainful employment. Yes, functional programmers also have the joy of learning something new. However, to be practical, the something new must also provide a tangible result. The purpose of the information here is to help you see the way to a new occupation that builds on the skills you discover through the functional programming paradigm.

Traditional functional programming development

When asked about functional programming occupations, a number of developers who use functional programming in their jobs actually started with a traditional job and then applied functional programming methodologies to it. When coworkers saw that these developers were writing cleaner code that executed faster, they started adopting functional programming methodologies as well. Theoretically, this approach can apply to any language, but it helps to use a pure language (such as Haskell) when you can, or an impure language (such as Python) when you can't. Of course, you'll encounter naysayers who will tell you that functional programming applies only to advanced developers who are already working as programmers, but if that were the case, a person wouldn't have a place to start.

Some organizations will be willing to experiment with functional programming and continue to rely on it after the developers using it demonstrate positive results. The problem is how to find such an organization. You can look online at places such as Indeed.com, which offers listings for the languages that work best for functional programming in traditional environments. At the time of this writing, Indeed.com had 175 Haskell job listings alone. Jobs for Python programmers with functional programming experience topped 6,020. A few websites deal specifically with functional programming jobs. For example, Functional Jobs provides an interesting list of occupations that you might want to try. The benefit of these sites is that the listings are extremely targeted, so you know you'll actually perform functional programming. A disadvantage is that the sites tend to be less popular than mainstream sites, so you may not see the variety of jobs that you were expecting.

New functional programming development

With the rise of online shopping, informational, and other kinds of sites, you can bet that a lot of new development is also going on. In addition, traditional organizations will require support for new strategies, such as using Amazon Web Services (AWS) to reduce costs (see AWS For Admins For Dummies and AWS For Developers For Dummies, by John Paul Mueller [Wiley], for additional information on AWS). Any organization that wants to use serverless computing, such as AWS Lambda, will likely need developers who are conversant in functional programming strategies. Consequently, the investment in learning the functional programming paradigm can pay off in the form of finding an interesting job using new technologies rather than spending hour after boring hour updating ancient COBOL code on a mainframe.

When going the new development route, be sure you understand the requirements for your job and have any required certifications. For example, when working with AWS, your organization may require that you have an AWS Certified Developer (or other) certification. Of course, other cloud providers exist, such as Microsoft Azure and Google Cloud. The article at zdnet.com tells you about the relative strengths of each of these offerings.

Creating your own functional programming development

Many developers started in their home or garage tinkering with things just to see what would happen. Becoming fascinated with code (its essence) is part of turning development into a passion rather than just a job. Some of the richest, best-known people in the world started out as developer entrepreneurs (think of people like Jeff Bezos and Bill Gates). In fact, you can find articles online, such as the one at Skillcrush, that tell precisely why developers make such great entrepreneurs. The advantage of being your own boss is that you do things your way, make your mark on the world, and create a new vision of what software can do. Yes, sometimes you get the money, too, but many developers have found that they become successful only after they figure out that creating your own development business is all about offering a service that someone else will buy. Articles, such as the ones at hackernoon.com and Codeburst, tell you how to make the transition from developer to entrepreneur.

The functional connection comes into play when you start to consider that the functional programming paradigm is somewhat new. Businesses are starting to pay attention to functional programming because of articles such as this InfoWorld offering. When businesses find out that functional programming not only creates better code but also makes developers more productive, they begin to see a financial reason to employ consultants (that's you) to move their organizations toward the functional programming paradigm.

Find functional programming jobs at forward-thinking businesses

Many businesses are already using functional programming methodologies. In some cases, these businesses started with functional programming, but in more cases the business transitioned. One such business is Jet.com, which offers online shopping that's like a mix of Amazon.com and Costco. You can read about this particular business at Kiplinger.com. The thing that will interest you is that Jet.com relies on F#, a multiparadigm language similar to Python from an environmental perspective, to meet its needs. Most languages want you to know that real companies are using them to do something useful. Consequently, you can find a site that provides a list of these organizations for Haskell and for Python. Languages that are more popular will also sprout a lot of articles. For example, the article at https://realpython.com/world-class-companies-using-python/ supplies a list of well-known organizations that use Python. You need to exercise care in applying to these organizations, however, because you never know whether you'll actually work with your programming language of choice (or whether you'll work as a developer at all).

Doing something really interesting as a functional programmer

Some people want to go to work, do a job for eight to ten hours, and then come home and forget about work. This information isn't for them. On the flip side, some people want to make their mark on the world and light it on fire; this information isn't for them, either. This section is for those people who fall between these two extremes: those who don't mind working a few extra hours as long as the work is interesting and meaningful, and who don't have to manage any business details. After all, the fun of functional programming is writing the code and figuring out interesting ways to make data jump through all sorts of hoops.

That's where job sites like Functional Works come into play. Sites such as Functional Works search for potential candidates for large organizations, such as Google, Facebook, Two Sigma, and Spotify. The jobs are listed by category in most cases. Be prepared to read for a while because the sites generally describe the jobs in detail. That's because these organizations want to be sure that you know what you're getting into, and they want to find the best possible fit. These sites often offer articles, such as "Compose Tetras." The articles are interesting because they give you a better perspective of what the site is about and why a company would choose this site, rather than another one, to find people. You learn more about functional programming as well.

Developing deep learning applications with functional programming

One of the most interesting and widely discussed subsets of Artificial Intelligence (AI) today is deep learning, in which algorithms use huge amounts of data to discover patterns and then use those patterns to perform data-based tasks. You might see the output as voice recognition or robotics, but the computer sees data: lots and lots of data. Oddly enough, functional programming techniques make creating deep learning applications significantly easier, and this area draws on a number of languages that are important in the world of functional programming. You can learn more about the world of AI in AI For Dummies, by John Paul Mueller and Luca Massaron (Wiley), and the world of machine learning in Machine Learning For Dummies, also by John Paul Mueller and Luca Massaron (Wiley).

Writing low-level code with functional programming

You might not initially think about using functional programming methods to write low-level code, but the orderly nature of functional programming languages makes them well suited to this task. Here are a few examples:

  • Compilers and interpreters: These applications (and that's what they are) work through many stages of processing, relying on tree-like structures to turn application code into a running application. Recursion makes processing tree-like structures easy, and functional languages excel at recursion. The CompCert C compiler is one example of this use.
  • Concurrent and parallel programming: Creating an environment in which application code executes concurrently, in parallel, is an incredibly hard task for most programming languages, but functional languages handle this task with ease. You could easily write a host environment using a functional language for applications written in other languages.
  • Security: The immutable nature of functional code makes it inherently safer. Creating the security features of an operating system or application using functional code significantly reduces the chance that the system will be hacked.

You can more easily address a wide range of low-level coding applications in a functional language because of how functional languages work. A problem can arise, however, when resources are tight, because functional languages can require more resources than other languages. In addition, if you need real-time performance, a functional language may not provide the ultimate in speed.

Helping others in the health care arena with functional programming

The health care field is leading the charge in creating new jobs, so your new functional programmer job might just find you in the health care industry. If you regard working in the medical industry as possibly the most boring job in the world, browse the job ads; the possibilities might be more interesting than you think. Oddly enough, many of these ads specifically require you to have functional programming experience. Some also specify that the job environment is relaxed and that the company expects you to be innovative in your approach to solving problems, which is hardly a formula for a boring job.

Use your functional programming skills to work as a data scientist

As a data scientist, you're more likely to use the functional programming features of Python than to adopt a wholly functional approach by using a language such as Haskell; Python remains a top language for data science. Articles such as the one at kdnuggets.com seem to question just how much penetration functional programming has made in the data science community; however, such penetration exists, and good reasons for data scientists to use functional programming include better ways to implement parallel programming. When you consider that a data scientist could rely on a GPU with up to 5,120 cores (such as the NVIDIA Titan V), parallel programming takes on a whole new meaning.

Of course, data science involves more than just analyzing huge datasets. The act of cleaning the data and making the various data sources work together is extremely time consuming, especially when getting the various data types aligned. However, even in this regard, using a functional language can be an immense help. Knowing a functional language gives you an edge as a data scientist, one that could lead to advancement or to more interesting projects that others without your edge will miss. The book Python For Data Science For Dummies, by John Paul Mueller and Luca Massaron (Wiley), provides significant insights into just how you can use Python to your advantage in data science, and implementing functional programming techniques in Python is just another step beyond.

Research the next big thing as a functional programmer

Often you'll find a query for someone interested in working as a researcher on a job site such as Indeed.com. In some cases, the listing will specifically state that you need functional programming skills. This requirement exists because working with huge datasets to determine whether a particular process is possible or an experiment succeeded, or to get the results of the latest study, all demand strict data processing. By employing functional languages, you can perform these tasks quickly using parallel processing. The strict typing and immutable nature of functional languages are a plus as well. Oddly enough, the favored languages for research, such as Clojure, are also among the highest-paying languages, according to sites such as TechRepublic. Consequently, if you want an interesting job in an incredibly competitive field with high pay, being a researcher with functional programming skills may be just what you're looking for.
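The compiler point made earlier turns on one idea: tree-like structures are processed naturally by recursion. Here's a toy sketch in Python (the tuple encoding of the expression tree is invented for this example) that evaluates (2 + 3) * 4 the way a simple interpreter might.

```python
# A toy recursive evaluator for expression trees of the form
# (operator, left_subtree, right_subtree), with numbers as leaves.
# The tuple format is invented for this illustration.
def evaluate(node):
    if isinstance(node, (int, float)):
        return node                      # leaf: a literal value
    op, left, right = node               # interior node
    l = evaluate(left)                   # recurse into each subtree
    r = evaluate(right)
    if op == "+":
        return l + r
    if op == "*":
        return l * r
    raise ValueError("unknown operator: " + op)

# (2 + 3) * 4 encoded as a tree:
tree = ("*", ("+", 2, 3), 4)
print(evaluate(tree))  # 20
```

Each call handles exactly one node and delegates the subtrees to recursive calls, which is why languages that excel at recursion handle compiler-style workloads so comfortably.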

10 Must-Have Haskell Libraries for Functional Programming

Article / Updated 05-08-2019

Haskell supports a broad range of libraries, which is why it’s such a good product to use. If you are headed down the path as a functional programmer, you should check out the rather lengthy list of available Haskell libraries. Chances are that you’ll find a Haskell library to meet almost any need in functional programming. The problem is figuring out precisely which library to use and, unfortunately, the Hackage site doesn’t really help much. The associated short descriptions are generally enough to get you pointed in the right direction, but experimentation is the only real way to determine whether a library will meet your needs. In addition, you should seek online reviews of the various libraries before you begin using them. Of course, that’s part of the pleasure of development: discovering new tools to meet specific functional programming needs and then testing them yourself. Haskell library #1: binary To store certain kinds of data, you must be able to serialize it — that is, change it into a format that you can store on disk or transfer over a network to another machine. Serialization takes complex data structures and data objects and turns them into a series of bits that an application can later reconstitute into the original structure or object using deserialization. The point is that the data can’t travel in its original form. The binary library enables an application to serialize binary data of the sort used for all sorts of purposes, including both sound and graphics files. It works on lazy byte strings, which can provide a performance advantage as long as the byte strings are error free and the code is well behaved. This particular library's fast speed is why it's so helpful for real-time binary data needs. According to the originator, you can perform serialization and deserialization tasks at speeds approaching 1 Gbps. 
According to the discussion at Superuser.com, a 1 Gb/sec data rate is more than sufficient to meet the 22 Mbps transfer rate requirement for 1080p video used for many purposes today. This transfer rate might not be good enough for 4K video data rates. If you find that binary doesn’t quite meet your video or audio processing needs, you can also try the cereal library. It provides many of the same features as binary, but uses a different coding strategy (strict versus lazy execution). You can read a short discussion of the differences on Stack Overflow’s site. GHC VERSION Most of the libraries you use with Haskell will specify a GHC version. The version number tells you the requirements for the Haskell environment; the library won’t work with an older GHC version. In most cases, you want to keep your copy of Haskell current to ensure that the libraries you want to use will work with it. Also, note that many library descriptions will include support requirements in addition to the version number. Often, you must perform GHC upgrades to obtain the required support or import other libraries. Make sure to always understand the GHC requirements before using a library or assuming that the library isn’t working properly. Haskell library #2: Hascore The Hascore library gives you the means to describe music. You use this library to create, analyze, and manipulate music in various ways. An interesting aspect of this particular library is that it helps you see music in a new way. It also enables people who might not ordinarily be able to work with music express themselves. The site shows how the library makes lets you visualize music as a kind of math expression. Of course, some musicians probably think that viewing music as a kind of math is to miss the point. However, you can find a wealth of sites that fully embrace the math in music, such as the American Mathematical Society (AMS) page. 
Some sites, such as Scientific American, even express the idea that knowing music can help someone understand math better, too. The point is that Hascore enables you to experience music in a new way through Haskell application programming. You can find other music- and sound-oriented libraries as well.

Haskell library #3: vect

Computer graphics is based heavily on math. Haskell provides a wide variety of suitable math libraries for graphic manipulation, but vect represents one of the better choices because it’s relatively fast and doesn’t get mired in detail. Plus, you can find it used in existing applications such as the LambdaCube engine, which helps you render advanced graphics on newer hardware. If your main interest in a graphics library is to experiment with relatively simple output, vect does come with OpenGL support, including projective four-dimensional operations and quaternions. You must load the support separately, but the support is fully integrated into the library.

Haskell library #4: vector

All sorts of programming tasks revolve around the use of arrays. The immutable built-in list type is a linked-list configuration, which means that it can use memory inefficiently and may not process data requests at a speed that works for your application. In addition, you can’t pass a linked list to other languages, which may be a requirement when working in graphics or another scenario in which high-speed interaction with other languages is needed. The vector library solves these and many other issues for which an array works better than a linked list. The vector library not only includes a wealth of features for managing data but also provides both mutable and immutable forms. Yes, using mutable data objects is the bane of functional programming, but sometimes you need to bend the rules a bit to process data fast enough to have it available when needed.
Because of the nature of this particular library, you should see eager execution (in place of the lazy execution that Haskell normally relies on) as essential. The use of eager processing also ensures that no potential for data loss exists and that cache issues are fewer.

Haskell library #5: aeson

A great many data stores today use JavaScript Object Notation (JSON) as a format. In fact, you can find JSON used in places you might not initially think about. For example, Amazon Web Services (AWS), among others, uses JSON to do everything from creating processing rules to creating configuration files. With this need in mind, you need a library to manage JSON data in Haskell, which is where aeson comes into play. This library provides everything needed to create, modify, and parse JSON data in a Haskell application.

Haskell library #6: attoparsec

Mixed-format data files can present problems. For example, an HTML page can contain both ASCII and binary data. The attoparsec library provides you with the means for parsing these complex data files and extracting the data you need from them. The actual performance of this particular library depends on how you write your parser and whether you use lazy evaluation. However, according to a number of sources, you should be able to achieve relatively high parsing speeds using this library. One of the more interesting ways to use attoparsec is to parse log files. The article at School of Haskell discusses how to use the library for this particular task. The article also gives an example of what writing a parser involves. Before you decide to use this particular library, you should spend time with a few tutorials of this type to ensure that you understand the parser creation process.

Haskell library #7: bytestring

You use the bytestring library to interact with binary data, such as network packets.
One of the best things about using bytestring is that it allows you to interact with the data using the same features as Haskell lists. Consequently, the learning curve is less steep than you might imagine, and your code is easier to explain to others. The library is also optimized for high-performance use, so it should meet any speed requirements for your application. Unlike many other parts of Haskell, bytestring also enables you to interact with data in the manner you actually need. With this in mind, you can use one of two forms of bytestring calls:

Strict: The library retains the data in one huge array, which may not use resources efficiently. However, this approach does let you interact with other APIs and other languages. You can pass the binary data without concern that the data will appear fragmented to the recipient.

Lazy: The library uses smaller strict arrays to hold the data. This approach uses resources more efficiently and can speed data transfers. You use the lazy approach when performing tasks such as streaming data online.

The bytestring library also provides support for a number of data presentations to make it easier to interact with the data in a convenient manner. In addition, you can mix binary and character data as needed. A Builder module also lets you easily create byte strings using simple concatenation.

Haskell library #8: stringsearch

Manipulating strings can be difficult, but you're aided by the fact that the data you manipulate is in human-readable form for the most part. When it comes to byte strings, the patterns are significantly harder to see, and precision often becomes more critical because of the manner in which applications use byte strings.
The stringsearch library enables you to perform the following tasks on byte strings quite quickly:

Search for particular byte sequences

Break the strings into pieces using specific markers

Replace specific byte sequences with new sequences

This library works with both strict and lazy byte strings. Consequently, it makes a good addition to libraries such as bytestring, which support both forms of bytestring calls. Learn more about how this library performs its various tasks.

Haskell library #9: text

There are times when the text-processing capabilities of Haskell leave a lot to be desired. The text library helps you perform a wide range of tasks using text in various forms, including Unicode. You can encode or decode text as needed to meet the various Unicode Transformation Format (UTF) standards. As helpful as it is to have a library for managing Unicode, the text library does a lot more with respect to text manipulation. For one thing, it can help you with internationalization issues, such as proper capitalization of words in strings. This library also works with byte strings in both a strict and a lazy manner. Providing this functionality means that the text library can help you in streaming situations to perform text conversions quickly.

Haskell library #10: moo

The moo library provides Genetic Algorithm (GA) functionality for Haskell. GA is often used to perform various kinds of optimizations and to solve search problems using techniques found in nature (natural selection). Yes, GA also helps in understanding physical or natural environments or objects, as you can see in this tutorial. The point is that it relies on evolutionary theory, one of the tenets of Artificial Intelligence (AI).
This library supports a number of GA variants out of the box:

Binary using bit-strings: Binary and Gray encoding; point mutation; one-point, two-point, and uniform crossover

Continuous using a sequence of real values: Gaussian mutation; BLX-α, UNDX, and SBX crossover

You can also create other variants through coding. These potential variants include permutation, tree, and hybrid encodings, which would require customizations.

The readme for this library tells you about other moo features and describes how they relate to the two out-of-the-box GA variants. Of course, the variants you code will have different features depending on your requirements. The single example provided with the readme shows how to minimize Beale’s function (check out the description of this function). You may be surprised at how few lines of code this particular example requires.
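Beale's function itself is easy to check by hand. A short Python sketch (the function name here is illustrative, not taken from the moo readme) confirms that its global minimum of 0 sits at the point (3, 0.5) — the value any optimizer, genetic or otherwise, should converge toward:

```python
def beale(x, y):
    # Beale's function, a standard two-variable optimization test function.
    return ((1.5 - x + x * y) ** 2
            + (2.25 - x + x * y ** 2) ** 2
            + (2.625 - x + x * y ** 3) ** 2)

print(beale(3, 0.5))   # 0.0 at the global minimum
print(beale(0, 0))     # larger at any other point
```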

Manipulating Dataset Entries for Functional Programming

Article / Updated 05-08-2019

You're unlikely to find a common dataset used with Python that doesn't provide relatively good documentation. You need to find the documentation online if you want the full story about how the dataset is put together, what purpose it serves, and who originated it, as well as any statistics you need to suit your functional programming goals. Fortunately, you can employ a few tricks to interact with a dataset without resorting to major online research.

Determining the dataset content for functional programming

Once you load or fetch existing datasets from specific sources, you can apply them to your functional programming goals. These datasets generally have specific characteristics that you can discover online at places like the Scikit-learn resources for the Boston house-prices dataset. However, you can also use the dir() function to learn about dataset content. When you use dir(Boston) with the previously created Boston house-prices dataset, you discover that it contains DESCR, data, feature_names, and target properties. Here is a short description of each property:

DESCR: Text that describes the dataset content and some of the information you need to use it effectively

data: The content of the dataset in the form of values used for analysis purposes

feature_names: The names of the various attributes in the order in which they appear in data

target: An array of values used with data to perform various kinds of analysis

The print(Boston.DESCR) function displays a wealth of information about the Boston house-prices dataset, including the names of attributes that you can use to interact with the data.

The information that the datasets contain can have significant commonality. For example, if you use dir(data) for the Olivetti faces dataset example described earlier, you find that it provides access to DESCR, data, images, and target properties.
As with the Boston house-prices dataset, DESCR gives you a description of the Olivetti faces dataset, which you can use for tasks like accessing particular attributes. By knowing the names of common properties and understanding how to use them, you can discover all you need to know about a common dataset in most cases without resorting to any online resource. In this case, you'd use print(data.DESCR) to obtain a description of the Olivetti faces dataset. Also, some of the description data contains links to sites where you can learn more information.

Using the dataset sample code for functional programming

The online sources are important because they provide you with access to sample code, in addition to information about the dataset. For example, the Boston house-prices site provides access to six examples, one of which is the Gradient Boosting Regression example. Discovering how others access these datasets can help you build your own code. Of course, the dataset doesn’t limit you to the uses shown by these examples; the data is available for any use you might have for it.

Creating a DataFrame

The common datasets are in a form that allows various types of analysis, as shown by the examples provided on the sites that describe them. However, you might not want to work with the dataset in that manner; instead, you may want something that looks a bit more like a database table. Fortunately, you can use the pandas library to perform the conversion in a manner that makes using the datasets in other ways easy. Using the Boston house-prices dataset as an example, the following code performs the required conversion:

import pandas as pd
BostonTable = pd.DataFrame(Boston.data, columns=Boston.feature_names)

If you want to include the target values with the DataFrame, you must also execute BostonTable['target'] = Boston.target. However, the examples here don’t use the target data.
Accessing specific records for functional programming

If you were to run a dir() command against a DataFrame, you would find that it provides you with an overwhelming number of functions to try. The pandas documentation supplies a good overview of what's possible (which includes all the usual database-specific tasks specified by CRUD). The following example code shows how to perform a query against a pandas DataFrame. In this case, the code selects only those housing areas where the crime rate is below 0.02 per capita.

CRIMTable = BostonTable.query('CRIM < 0.02')
print(CRIMTable.count()['CRIM'])

The output shows that only 17 records match the criteria. The count() function enables the application to count the records in the resulting CRIMTable. The index, ['CRIM'], selects just one of the available attributes (because every column is likely to have the same count). You can display all these records with all of the attributes, but you may want to see only the number of rooms and the average house age for the affected areas. The following code shows how to display just the attributes you actually need:

print(CRIMTable[['RM', 'AGE']])

As the output shows, the houses vary between 5 and nearly 8 rooms in size. The age varies from almost 14 years to a little over 65 years. You might find it a bit hard to work with the unsorted data. Fortunately, you have access to the full range of common database features. If you want to sort the values by number of rooms, you use:

print(CRIMTable[['RM', 'AGE']].sort_values('RM'))

As an alternative, you can always choose to sort by average home age:

print(CRIMTable[['RM', 'AGE']].sort_values('AGE'))
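The query and sort operations described above work on any DataFrame, not just one built from the Boston dataset. This sketch uses a few made-up rows with the same column names (the values are invented for illustration, not taken from the real dataset):

```python
import pandas as pd

# A tiny stand-in for BostonTable with invented values.
BostonTable = pd.DataFrame({
    'CRIM': [0.01, 0.03, 0.015, 0.25],
    'RM':   [6.5,  5.9,  7.1,   6.0],
    'AGE':  [45.0, 30.2, 60.1,  80.0],
})

# Select only areas with a per-capita crime rate below 0.02.
CRIMTable = BostonTable.query('CRIM < 0.02')
print(CRIMTable.count()['CRIM'])    # 2 rows match here

# Show only the columns of interest, sorted by room count.
print(CRIMTable[['RM', 'AGE']].sort_values('RM'))
```

Because query() returns an ordinary DataFrame, every other pandas operation — sorting, grouping, further filtering — chains onto the result the same way.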

Working with Datasets in Functional Programming

Article / Updated 05-07-2019

Functional programming is much easier when you have a standard dataset. A standard dataset is one that provides a specific number of records using a specific format. It normally appears in the public domain and is used by professionals around the world for various sorts of tests. Professionals categorize these datasets in various ways for functional programming and other programming paradigms:

Kinds of fields (features or attributes)

Number of fields

Number of records (cases)

Complexity of data

Task categories (such as classification)

Missing values

Data orientation (such as biology)

Popularity

Depending on where you search, you can find all sorts of other information, such as who donated the data and when. In some cases, old data may not reflect current social trends, making any testing you perform suspect. Some languages actually build the datasets into their downloadable source so that you don’t even have to do anything more than load them.

Given the mandates of the General Data Protection Regulation (GDPR), you also need to exercise care in choosing any dataset that could potentially contain any individually identifiable information. Some people didn’t prepare datasets correctly in the past, and these datasets don’t quite meet the requirements. Fortunately, you have access to resources that can help you determine whether a dataset is acceptable, such as the dataset found on IBM. Of course, knowing what a standard dataset is and why you would use it are two different questions.
Many developers want to test using their own custom data, which is prudent, but using a standard dataset does provide specific benefits, as listed here:

Using common data for performance testing

Reducing the risk of hidden data errors causing application crashes

Comparing results with other developers

Creating a baseline test for custom data testing later

Verifying the adequacy of error-trapping code used for issues such as missing data

Ensuring that graphs and plots appear as they should

Saving time creating a test dataset

Devising mock-ups for demo purposes that don’t compromise sensitive custom data

A standardized common dataset is just a starting point, however. At some point, you need to verify that your own custom data works, but after verifying that the standard dataset works, you can do so with more confidence in the reliability of your application code. Perhaps the best reason to use one of these datasets is to reduce the time needed to locate and fix errors of various sorts — errors that might otherwise prove time consuming because you couldn’t be sure of the data that you’re using.

Finding the Right Dataset to meet your functional programming goals

Locating the right dataset for testing purposes is essential in functional programming. Fortunately, you don’t have to look very hard because some online sites provide you with everything needed to make a good decision.

Locating general dataset information

Datasets appear in a number of places online, and you can use many of them for general needs. An example of these sorts of datasets appears on the UCI Machine Learning Repository. As the table shows, the site categorizes the individual datasets so that you can find the dataset you need. More important, the table helps you understand the kinds of tasks that people normally employ the dataset to perform. If you want to know more about a particular dataset, you click its link and go to that dataset's detail page.
You can determine whether a dataset will help you test certain application features, such as searching for and repairing missing values. The Number of Web Hits field tells you how popular the dataset is, which can affect your ability to find others who have used the dataset for testing purposes. All this information is helpful in ensuring that you get the right dataset for a particular need; the goals include error detection, performance testing, and comparison with other applications of the same type. Even if your language provides easy access to these datasets, getting onto a site such as the UCI Machine Learning Repository can help you understand which of these datasets will work best. In many cases, a language will provide access to the dataset and a brief description of dataset content — not a complete description of the sort you find on this site.

Using library-specific datasets

Depending on your programming language, you likely need to use a library to work with datasets in any meaningful way. One such library for Python is Scikit-learn. This is one of the more popular libraries because it contains such an extensive set of features and also provides the means for loading both internal and external datasets. You can obtain various kinds of datasets using Scikit-learn as follows:

Toy datasets: Provides smaller datasets that you can use to test theories and basic coding.

Image datasets: Includes datasets containing basic picture information that you can use for various kinds of graphic analysis.

Generators: Defines randomly generated data based on the specifications you provide and the generator used. You can find generators for classification and clustering, regression, manifold learning, and decomposition.

Support Vector Machine (SVM) datasets: Provides access to both the svmlight and libsvm implementations, which include datasets that enable you to perform sparse dataset tasks.

External load: Obtains datasets from external sources.
Python provides access to a huge number of datasets, each of which is useful for a particular kind of analysis or comparison. When accessing an external dataset, you may have to rely on additional libraries:

pandas.io: Provides access to common data formats that include CSV, Excel, JSON, and SQL.

scipy.io: Obtains information from binary formats popular with the scientific community, including .mat and .arff files.

numpy/routines.io: Loads columnar data into NumPy arrays.

skimage.io: Loads images and videos into NumPy arrays.

scipy.io.wavfile.read: Reads .wav file data into NumPy arrays.

Other: Includes standard datasets that provide enough information for specific kinds of testing in a real-world manner. These datasets include (but are not limited to) Olivetti Faces and 20 Newsgroups Text.

How to load a dataset for functional programming

The fact that Python provides access to such a large variety of datasets might make you think that a common mechanism exists for loading them. Actually, you need a variety of techniques to load even common datasets. As the datasets become more esoteric, you need additional libraries and other techniques to get the job done. The following information doesn’t give you an exhaustive view of dataset loading in Python, but you do get a good overview of the process for commonly used datasets so that you can use these datasets within the functional programming environment.

Working with toy datasets

As previously mentioned, a toy dataset is one that contains a small amount of common data that you can use to test basic assumptions, functions, algorithms, and simple code. The toy datasets reside directly in Scikit-learn, so you don’t have to do anything special except call a function to use them.
The following list provides a quick overview of the function used to import each of the toy datasets into your Python code:

load_boston(): Regression analysis with the Boston house-prices dataset

load_iris(): Classification with the iris dataset

load_diabetes(): Regression with the diabetes dataset

load_digits([n_class]): Classification with the digits dataset

load_linnerud(): Multivariate regression using the linnerud dataset

load_wine(): Classification with the wine dataset

load_breast_cancer(): Classification with the Wisconsin breast cancer dataset

Note that each of these functions begins with the word load. When you see this formulation in Python, the chances are good that the associated dataset is one of the Scikit-learn toy datasets. The technique for loading each of these datasets is the same across examples. The following example shows how to load the Boston house-prices dataset:

from sklearn.datasets import load_boston
Boston = load_boston()
print(Boston.data.shape)

The output from the print() call is (506, 13): 506 cases, each with 13 features.

Creating custom data for functional programming

The purpose of each of the data generator functions is to create randomly generated datasets that have specific attributes. For example, you can control the number of data points using the n_samples argument and use the centers argument to control how many groups the function creates within the dataset. Each of the calls starts with the word make. The kind of data depends on the function; for example, make_blobs() creates Gaussian blobs for clustering. The various functions reflect the kind of labeling provided: single label and multilabel. You can also choose bi-clustering, which allows clustering of both matrix rows and columns.
Here's an example of creating custom data:

from sklearn.datasets import make_blobs
X, Y = make_blobs(n_samples=120, n_features=2, centers=4)
print(X.shape)

The output tells you that you have indeed created an X object containing a dataset of 120 cases, each with two features. The Y object contains the cluster label for each case. Seeing the data plotted using the following code is more interesting:

import matplotlib.pyplot as plt
%matplotlib inline
plt.scatter(X[:, 0], X[:, 1], s=25, c=Y)
plt.show()

In this case, you tell Notebook to present the plot inline. The output is a scatter chart using the x-axis and y-axis contained in X. The c=Y argument tells scatter() to create the chart using the color values found in Y. Notice that you can clearly see the four clusters based on their color.

Fetching common datasets for functional programming

At some point, you need larger datasets of common data to use for testing. The toy datasets that worked fine when you were testing your functions may not do the job any longer. Python provides access to larger datasets that help you perform more complex testing but won’t require you to rely on network sources. These datasets will still load on your system so that you’re not waiting on network latency during testing. Consequently, they’re between the toy datasets and a real-world dataset in size. More important, because they rely on actual (standardized) data, they reflect real-world complexity.
The following list tells you about the common datasets:

fetch_olivetti_faces(): Olivetti faces dataset from AT&T containing ten images each of 40 different test subjects; each grayscale image is 64 x 64 pixels in size

fetch_20newsgroups(subset='train'): Data from 18,000 newsgroup posts based on 20 topics, with the dataset split into two subgroups: one for training and one for testing

fetch_mldata('MNIST original', data_home=custom_data_home): Dataset containing machine learning data in the form of 70,000 28-x-28-pixel handwritten digits from 0 through 9

fetch_lfw_people(min_faces_per_person=70, resize=0.4): Labeled Faces in the Wild dataset, which contains pictures of famous people in JPEG format

datasets.fetch_covtype(): U.S. forestry dataset containing the predominant tree type in each of the patches of forest in the dataset

datasets.fetch_rcv1(): Reuters Corpus Volume I (RCV1), a dataset containing 800,000 manually categorized stories from Reuters, Ltd.

Notice that each of these functions begins with the word fetch. Some of these datasets require a long time to load. For example, the Labeled Faces in the Wild (LFW) dataset is 200MB in size, which means that you wait several minutes just to load it. However, at 200MB, the dataset also begins (in small measure) to reflect the size of real-world datasets. The following code shows how to fetch the Olivetti faces dataset:

from sklearn.datasets import fetch_olivetti_faces
data = fetch_olivetti_faces()
print(data.images.shape)

When you run this code, you see that the shape is 400 images, each of which is 64 x 64 pixels. The resulting data object contains a number of properties, including images. To access a particular image, you use data.images[?], where ? is the number of the image you want to access in the range from 0 to 399. Here is an example of how you can display an individual image from the dataset.
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(data.images[1], cmap="gray")
plt.show()

The cmap argument tells imshow() how to display the image, which is in grayscale in this case. This tutorial provides additional information on using cmap, as well as on adjusting the image in various ways.
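The same load-and-inspect pattern applies to every Scikit-learn toy dataset, so it pays to memorize it. Here is a sketch using the iris dataset (assuming Scikit-learn is installed):

```python
from sklearn.datasets import load_iris

# Load the iris toy dataset, which ships with Scikit-learn.
iris = load_iris()

print(iris.data.shape)        # (150, 4): 150 cases, 4 features
print(iris.feature_names)     # names of the four attributes
print(iris.target.shape)      # (150,): one class label per case
```

Swapping load_iris for any other load_* function gives you the same data, feature_names, and target properties to inspect.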

Data Mapping and Functional Programming

Article / Updated 05-07-2019

You can find a number of extremely confusing references to the term map in functional programming. For example, a map is associated with database management, in which data elements are mapped between two distinct data models. However, with regard to functional programming, mapping refers to the process of applying a high-order function to each member of a list. Because the function is applied to every member of the list, the relationships among list members are unchanged. Many reasons exist to perform mapping in functional programming, such as ensuring that the range of the data falls within certain limits.

Understanding the purpose of data mapping

The main idea behind data mapping is to apply a function to all members of a list or similar structure. Using mapping can help you adjust the range of the values or prepare the values for particular kinds of analysis. Functional languages originated the idea of data mapping, but mapping now sees use in most programming languages that support first-class functions.

The goal of mapping is to apply the function or functions to a series of numbers equally to achieve specific results. For example, squaring the numbers can rid the series of any negative values. Of course, you can just as easily take the absolute value of each number. You may need to convert a probability between 0 and 1 to a percentage between 0 and 100 for a report or other output. The relationship between the values stays the same, but the range doesn't. Mapping enables you to obtain specific data views.

Performing data mapping tasks with Haskell

Haskell is one of the few computer languages in which a function named map isn't necessarily what you want. For example, the map associated with Data.Map.Strict, Data.Map.Lazy, and Data.IntMap works with the creation and management of dictionaries, not the application of a consistent function to all members of a list (see this Haskell example for details).
What you want instead is the map function that appears as part of the base prelude, so you can access map without importing any libraries. The map function accepts a function as input, along with one or more values in a list. You might create a function, square, that outputs the square of the input value: square x = x * x. A list of values, items = [0, 1, 2, 3, 4], serves as input. Calling map square items produces an output of [0,1,4,9,16]. Of course, you could easily create another function, double x = x + x, with a map double items output of [0,2,4,6,8]. The output you receive clearly depends on the function you use as input (as expected).

You can easily get overwhelmed trying to create complex functions to modify the values in a list. Fortunately, you can use the composition operator (., or dot) to combine them. Haskell applies the second function first. Consequently, map (square.double) items produces an output of [0,4,16,36,64] because Haskell doubles the numbers first and then squares them. Likewise, map (double.square) items produces an output of [0,2,8,18,32] because squaring occurs first, followed by doubling.

The apply operator ($) is also important to mapping. You can create a condition in which you apply an argument to a list of functions. You place the argument first, followed by the function list: map ($4) [double, square]. The output is a list with one element for each function, which is [8,16] in this case. Using recursion would allow you to apply a list of numbers to a list of functions.

Performing data mapping tasks with Python

Python performs many of the same mapping tasks as Haskell, but often in a slightly different manner. Look, for example, at the following code:

square = lambda x: x**2
double = lambda x: x + x
items = [0, 1, 2, 3, 4]
print(list(map(square, items)))
print(list(map(double, items)))

You obtain the same output as you would with Haskell using similar code.
However, note that you must convert the map object to a list object before printing it. Given that Python is an impure language, creating code that processes a list of inputs against two or more functions is relatively easy, as shown in this code:

funcs = [square, double]
for i in items:
    value = list(map(lambda f: f(i), funcs))
    print(value)

Note that, as with the Haskell code, you're actually applying individual list values against the list of functions. However, Python requires a lambda function to get the job done. Each pass through the loop prints a two-element list holding the square and the double of the current item.
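Python has no built-in equivalent of Haskell's composition operator, but a small helper reproduces the same ordering behavior. This is a sketch; compose is a hypothetical helper name, not part of the standard library:

```python
square = lambda x: x**2
double = lambda x: x + x
items = [0, 1, 2, 3, 4]

def compose(f, g):
    # Like Haskell's (f . g): apply g first, then f.
    return lambda x: f(g(x))

# Matches map (square.double) items: double first, then square.
print(list(map(compose(square, double), items)))  # [0, 4, 16, 36, 64]

# Matches map (double.square) items: square first, then double.
print(list(map(compose(double, square), items)))  # [0, 2, 8, 18, 32]
```

The outputs match the Haskell results shown earlier, confirming that the rightmost function in the composition runs first in both languages.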

Types of Data Manipulation Used in Functional Programming

Article / Updated 05-07-2019

When you mention the term data manipulation, you convey different information to different people, depending on their particular specialty. An overview of data manipulation may include the term CRUD, which stands for Create, Read, Update, and Delete. A database manager may view data solely from this low-level perspective that involves just the mechanics of working with data. However, a database full of data, even accurate and informative data, isn’t particularly useful, even if you have all the best CRUD procedures and policies in place. Consequently, just defining data manipulation as CRUD isn’t enough, but when approaching functional programming, it’s a start.

To make really huge datasets useful, you must transform them in some manner. Again, depending on whom you talk to, transformation can take on all sorts of meanings. The one meaning that isn’t discussed here is the modification of data such that it implies one thing when it actually said something else at the outset (think of this as spin doctoring the data). In fact, it’s a good idea to avoid this sort of data manipulation entirely because you can end up with completely unpredictable results when performing analysis, even if those results initially look promising and even say what you feel they should say.

Another kind of data transformation actually does something worthwhile. In this case, the meaning of the data doesn’t change; only the presentation of the data changes. You can separate this kind of data transformation into a number of methods that include (but aren’t necessarily limited to) tasks such as the following:

Cleaning: As with anything else, data gets dirty. You may find that some of it is missing information, and some of it may actually be correct but outdated. In fact, data becomes dirty in many ways, and you always need to clean it before you can use it.
Machine Learning For Dummies, by John Paul Mueller and Luca Massaron (Wiley), discusses the topic of cleaning in considerable detail. Verification: Establishing that data is clean doesn’t mean that the data is correct. A dataset may contain many entries that seem correct but really aren’t. For example, a birthday may be in the right form and appear to be correct until you determine that the person in question is more than 200 years old. A part number may appear in the correct form, but after checking, you find that your organization never produced a part with that number. The act of verification helps ensure the veracity of any analysis you perform and generates fewer outliers to skew the results. Data typing: Data can appear to be correct and you can verify it as true, yet it may still not work. A significant problem with data is that the type may be incorrect or it may appear in the wrong form. For example, one dataset may use integers for a particular column (feature), while another uses floating-point values for the same column. Likewise, some datasets may use local time for dates and times, while others might use GMT. The transformation of the data from various datasets to match is an essential task, yet the transformation doesn’t actually change the data’s meaning. Form: Datasets come with many form issues. For example, one dataset may use a single column for people’s names, while another might use three columns (first, middle, and last), and another might use five columns (prefix, first, middle, last, and suffix). The three datasets are correct, but the form of the information is different, so a transformation is needed to make them work together. Range: Some data is categorical or uses specific ranges to denote certain conditions. For example, probabilities range from 0 to 1. In some cases, there isn’t an agreed-upon range. Consequently, you find data appearing in different ranges even though the data refers to the same sort of information. 
Transforming all the data to match the same range enables you to perform analysis by using data from multiple datasets. Baseline: You hear many people talk about dB when considering audio output in various scenarios. However, a decibel is simply a logarithmic ratio. Without a reference value or a baseline, determining what the dB value truly means is impossible. For audio, the dB is referenced to 1 volt (dBV). The reference is standard and therefore implied, even though few people actually know that a reference is involved. Now, imagine the chaos that would result if some people used 1 volt for a reference and others used 2 volts. dBV would become meaningless as a unit of measure. Many kinds of data form a ratio or other value that requires a reference. Transformations can adjust the reference or baseline value as needed so that the values can be compared in a meaningful way. You can come up with many other data transformations. The point of this information is to point out that the method used determines the kind of data transformation that occurs, and you must perform certain kinds of transformations to make data useful. Applying an incorrect transformation or the correct transformation in the wrong way will result in useless output even when the data itself is correct.
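The range transformation described above can be sketched in Python, the impure-approach language this book favors for production-style examples. The min-max rescaling function below is an illustrative sketch of the general technique, not code from the book:

```python
def rescale(values, new_min=0.0, new_max=1.0):
    """Rescale numbers to a new range with min-max normalization.

    Only the presentation (the range) changes; the relative ordering
    and spacing of the data stay the same.
    """
    old_min, old_max = min(values), max(values)
    span = old_max - old_min
    if span == 0:
        # All values are identical; map them all to the new minimum.
        return [new_min for _ in values]
    scale = (new_max - new_min) / span
    return [new_min + (v - old_min) * scale for v in values]

# Two datasets that record the same information on different ranges...
percentages = [25, 50, 100]       # a 0-100 scale
probabilities = [0.25, 0.5, 1.0]  # a 0-1 scale

# ...become directly comparable once both are rescaled to 0-1.
print(rescale(percentages))
print(rescale(probabilities))
```

Both calls yield 0.0, roughly 0.33, and 1.0, so an analysis can now combine the two datasets even though their original ranges disagreed.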

How to Use Haskell Libraries for Functional Programming

Article / Updated 05-07-2019

Haskell has a huge library support base in which you can find all sorts of useful functions. Using library code for functional programming is a time saver because libraries usually contain well-constructed and debugged code. The import function allows you to use external code. The following steps take you through a simple Haskell library usage example:

1. Open GHCi, if necessary.
2. Type import Data.Char and press Enter. Note that the prompt changes to Prelude Data.Char> to show that the import is successful. The Data.Char library contains functions for working with the Char data type, and you can see a listing of these functions in its documentation.
3. Type ord('a') and press Enter. This example uses the ord function to convert a character to its ASCII numeric representation. You see the output value of 97.

You can obtain datasets for Haskell that can be used in functional programming, but first you need to perform a few tasks. The following steps will work for any platform if you have installed Haskell correctly.

1. Open a command prompt or Terminal window with administrator privileges.
2. Type cabal update and press Enter. You see the update process start. The cabal utility provides the means to perform updates in Haskell, and the first thing you want to do is ensure that your copy of cabal is up to date.
3. Type cabal install Datasets and press Enter. You see a rather long list of download, install, and configure sequences. All these steps install the Datasets module onto your system.
4. Type cabal list Datasets and press Enter. The cabal utility outputs the installed status of Datasets, along with other information. If you see that Datasets isn't installed, try the installation again by typing cabal install Datasets --force-reinstalls and pressing Enter instead.
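For comparison with the GHCi ord example above, Python offers the same character-to-number conversion through its built-in ord and chr functions, with no import required. This quick sketch is for comparison only, not a listing from the book:

```python
# ord converts a character to its numeric code, just like Data.Char's ord.
print(ord('a'))  # prints 97

# chr performs the inverse conversion, from number back to character.
print(chr(97))   # prints a
```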
The Boston Housing dataset can be used as an example, so the following steps show how to load a copy of the Boston Housing dataset in Haskell.

1. Open GHCi or WinGHCi.
2. Type import Numeric.Datasets (getDataset) and press Enter. Notice that the prompt changes; in fact, it will change each time you load a new package. This step loads the getDataset function, which you need to load the Boston Housing dataset into memory.
3. Type import Numeric.Datasets.BostonHousing (bostonHousing) and press Enter. The BostonHousing package loads as bostonHousing. Loading the package doesn't load the dataset; it provides support for the dataset, but you still need to load the data.
4. Type bh <- getDataset bostonHousing and press Enter. This step loads the Boston Housing dataset into memory as the object bh. You can now access the data.
5. Type print (length bh) and press Enter. You see an output of 506.
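In Python, the same load-then-verify workflow can be sketched with the standard csv module. The file contents below are a made-up three-row stand-in for a real housing dataset, so only the pattern (load the data into memory, then check how many rows arrived) mirrors the Haskell steps:

```python
import csv
import io

# Hypothetical dataset contents standing in for a downloaded file.
raw = """crim,rm,medv
0.00632,6.575,24.0
0.02731,6.421,21.6
0.02729,7.185,34.7
"""

# Load the rows into memory, then confirm the row count -- the same
# sanity check as typing print (length bh) in GHCi.
rows = list(csv.DictReader(io.StringIO(raw)))
print(len(rows))        # prints 3
print(rows[0]["medv"])  # prints 24.0
```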

What is Functional Programming?

Article / Updated 05-07-2019

Functional programming has somewhat different goals and approaches than other paradigms use. Goals define what the functional programming paradigm is trying to do in forging the approaches used by languages that support it. However, the goals don't specify a particular implementation; doing that is within the purview of the individual languages.

The main difference between the functional programming paradigm and other paradigms is that functional programs use math functions rather than statements to express ideas. This difference means that rather than writing a precise set of steps to solve a problem, you use math functions, and you don't worry about how the language performs the task. In some respects, this approach makes languages that support the functional programming paradigm similar to applications such as MATLAB. Of course, with MATLAB, you get a user interface, which reduces the learning curve; however, you pay for the convenience of that user interface with a loss of the power and flexibility that functional languages offer. Using this approach to defining a problem relies on the declarative programming style, which you see used with other paradigms and languages, such as Structured Query Language (SQL) for database management.

In contrast to other paradigms, the functional programming paradigm doesn't maintain state. The use of state enables you to track values between function calls. Other paradigms use state to produce variant results based on environment, such as determining the number of existing objects and doing something different when the number of objects is zero. As a result, calling a function in a functional program always produces the same result given a particular set of inputs, thereby making functional programs more predictable than those that support state. Because functional programs don't maintain state, the data they work with is also immutable, which means that you can't change it. To change a variable's value, you must create a new variable.
Again, this makes functional programs more predictable than other approaches and can make functional programs easier to run on multiple processors. Keep reading for additional information on how the functional programming paradigm differs.

Understanding the goals of functional programming

Imperative programming, the kind of programming that most developers have done until now, is akin to an assembly line, where data moves through a series of steps in a specific order to produce a particular result. The process is fixed and rigid, and the person implementing the process must build a new assembly line every time an application requires a new result. Object-oriented programming (OOP) simply modularizes and hides the steps, but the underlying paradigm is the same. Even with modularization, OOP often doesn't allow rearrangement of the object code in unanticipated ways because of the underlying interdependencies of the code.

Functional programming gets rid of these interdependencies by replacing procedures with pure functions, which requires the use of immutable state. Consequently, the assembly line no longer exists; an application can manipulate data using the same methodologies used in pure math. The seeming restriction of immutable state provides the means to allow anyone who understands the math of a situation to also create an application to perform the math.

Using pure functions creates a flexible environment in which code order depends on the underlying math. That math models a real-world environment, and as our understanding of that environment changes and evolves, the math model and the functional code can change with it, without the usual problems of brittleness that cause imperative code to fail. Modifying functional code is faster and less error prone because the person implementing the change needs to understand only the math and doesn't need to know how the underlying code works.
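A minimal Python sketch (illustrative, not from the book) shows what pure functions and immutable data look like in practice:

```python
# A pure function: the result depends only on the inputs, so the same
# inputs always produce the same output, with no hidden state involved.
def add(x, y):
    return x + y

# Immutability in practice: rather than changing a value in place, you
# build a new one. Tuples make the point well because Python refuses
# to modify them after creation.
prices = (19.99, 5.49, 3.25)
discounted = tuple(round(p * 0.9, 2) for p in prices)

print(add(1, 2))   # always 3, no matter when or how often you call it
print(prices)      # the original data is untouched
print(discounted)  # a new tuple holds the changed values
```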
In addition, learning how to create functional code can be faster, as long as the person understands the math model and its relationship to the real world. Functional programming also embraces a number of unique coding approaches, such as the capability to pass a function to another function as input. This capability enables you to change application behavior in a predictable manner that isn't possible using other programming paradigms.

Using the pure approach to functional programming

Programming languages that use the pure approach to the functional programming paradigm rely, for the most part, on lambda calculus principles. In addition, a pure-approach language allows the use of functional programming techniques only, so that the result is always a functional program. The best-known pure-approach language is Haskell because it provides the purest implementation, according to articles about functional programming. Haskell is also a relatively popular language, according to the TIOBE index. Other pure-approach languages include Lisp, Racket, Erlang, and OCaml.

As with many elements of programming, opinions run strongly regarding whether a particular programming language qualifies for pure status. For example, many people would consider JavaScript a pure language, even though it's untyped. Others feel that domain-specific declarative languages such as SQL and Lex/Yacc qualify for pure status even though they aren't general programming languages. Simply having functional programming elements doesn't qualify a language as adhering to the pure approach.

Using the impure approach to functional programming

Many developers have come to see the benefits of functional programming. However, they don't want to give up the benefits of their existing language, so they use a language that mixes functional features with one of the other programming paradigms. For example, you can find functional programming features in languages such as C++, C#, and Java.
When working with an impure language, you need to exercise care because your code won't work in a purely functional manner, and features that you might think work one way may actually work in another. For example, you can't pass a function to another function in some languages. At least one language, Python, is designed from the outset to support multiple programming paradigms. In fact, some online courses in programming make a point of teaching this particular aspect of Python as a special benefit. The use of multiple programming paradigms makes Python quite flexible, but it also leads to both complaints and apologists. Python is great for demonstrating the impure approach to functional programming because it's both popular and flexible, plus it's easy to learn.
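The higher-order capability mentioned earlier, passing one function to another as input, takes only a few lines of Python. This is an illustrative sketch, not a listing from the book:

```python
def apply_twice(func, value):
    """Apply func to value, then apply it again to the result."""
    return func(func(value))

def increment(x):
    return x + 1

# Swapping the function you pass in changes the behavior predictably.
print(apply_twice(increment, 5))             # prints 7
print(apply_twice(lambda s: s + "!", "go"))  # prints go!!
```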
