
Paul S. Prueitt

Biography

Professor Prueitt has taught mathematics, physics, and computer science courses in the nation's community colleges, four-year colleges, and universities. He has served as Research Professor of Physics at Georgetown University and Research Professor of Computer Science at George Washington University, and as Associate or Assistant Professor of Mathematics at HBCUs in Virginia, Tennessee, Alabama, and Georgia. Prueitt was co-director of an international research center at Georgetown University (1991-1994). He is an NSF reviewer and Principal Investigator. He served for over a decade as an independent consultant focused on information infrastructure, software platforms, and intelligence algorithms. He has consulted on national intelligence software platforms and continues this work under private contracts.

His post-Master's training in pure and applied mathematics focused on real analysis, topology, and numerical analysis. His PhD, earned in 1988 from The University of Texas at Arlington, used differential and difference equations as models of neural and immunological function. He has over forty-five publications in journals, books, and conference proceedings.

Motivated by a desire to understand the nature of the American educational crisis, he served for seven years at Historically Black Colleges and Universities and at open-door, minority-serving institutions. He currently teaches mathematics learning support courses in Atlanta using a deep learning method. The method uses four steps to bring learning support students to a college-level understanding of mathematics, and is motivated by a study of behavioral neuroscience and the properties of immune response mechanisms.

Contributions

Service-Oriented Architecture and Data Mining: A Step Towards a Cognitive Media? Published: July 24, 2013 • Service Technology Magazine Issue LXXIV

Abstract: This article makes the case that national security is impacted by systemic educational failure, particularly in higher mathematics. If one assumes that America is experiencing an educational crisis, then the proper development of new educational tools is a matter of national security.

We suggest that time and resources be spent, as IBM is doing, on the development of data mining applications using service-oriented governance principles. We take the position that these tools should be applied to educational data, in particular to student learning outcomes in first-year college mathematics. Knowledge representations may be created from data mining, resulting in models of what the individual has and has not learned.

A modern understanding of cognitive neuroscience will enrich these individualized models. The models may then be used to optimize how curriculum materials are presented to the individual, resulting in adaptive deep learning. Governance policy is then applied to enforce core principles, including those arising from the definition of deep learning. Deep learning is seen as a modification of underlying memory engrams.

Properly defined services may be said to understand individualized models. Using governance principles basic to stratified service architecture, learning tasks may be supported and assessed within a unified enterprise architecture. Governance, along with service definition and model definition, could create a national assessment strategy that moves K-12 and higher education toward specific goals.

Overview

Today we see the development of leading-edge cloud-based applications supporting business, government, and academic enterprises. These applications may be close to a critical mass at which the general public will suddenly have the use of results from mining massive data sources in real time.

It is possible to see how such a critical mass might be achieved by looking at the rapid convergence of data mining and service-oriented architecture. But there is a third element: the neuroscience of cognition. We suggest that widely available knowledge about the processes of cognition is necessary if a critical mass is to be achieved. The question arises whether this is possible. Principled governance is needed to integrate clear perceptions from cognitive neuroscience with service-oriented architecture.

We note that one consequence of the crisis in education is that the public does not currently have common knowledge about data mining or knowledge representation. One may conjecture that if the crisis were ended, new markets would develop and economic stimulation would occur. The question of what is possible is then germane. It is a question of achieving the proper balance when creating service governance policy.

Personalized "knowledge systems" must be engineered, but not over-engineered. This is not a new statement within the circles that debate artificial intelligence (AI) scholarship. Artificial intelligence methods address our need to supply detail to the machine-like mechanisms necessary for efficient processing of massive data. But the strong form of AI imposes too much structure, causing flexibility to suffer. By contrast, service-oriented governance over the complex workings of large information systems provides the required flexibility [REF-1, 2, 3]. It does so by relying more on governance principles than on hardwired stimulus-response.

This issue of flexibility is addressed by a weak form of AI, in which the neuroscience is fully, not just partially, taken into account. The goal of this type of technology is to aid human cognition rather than to replace it.

If knowledge about the processes of cognition were merely technical, something only PhDs might understand, then the social change we expect would perhaps not be so deep. We are in luck, however: public progress is being made both on knowledge representational methods and in cognitive neuroscience.

Reification

Results from standardized data mining are clearly being "reified" into complex models of everyday events and entities, as evidenced by any topic map, cognitive map, or mind map [REF-4]. The barriers to a common understanding of how to use these models have been resilient, but this may change if a critical mass of social understanding is achieved.

Recent trends in automated machine reification are complex and in some ways mirror the formation of thought in humans. Because of the similarity between these weak forms of AI and our private familiarity with our own thought processes, we may be able to bridge common mental processes to computational aids, and then to cognition [REF-5].

It is true that public understanding of the relationship between ontology reification from data mining and service structure is barely on the horizon. We also find an absence of a proper and common perception of the issues delineated by the weak and strong forms of AI. This absence is a factor in considering what might be developed in the data mining application space.

We are therefore faced with a chicken-and-egg enigma: a better educational process is needed to end the crisis in education, yet adoption of the new educational tools presupposes a public that already understands them. This impasse suggests a role that can be assumed by large established textbook publishers, the federal government, or both. However, a dedicated large-scale effort will not be sufficient unless well-conceived design principles are in place. SOA governance, service design principles, and uniformity in the development of models are available. Without this type of scaffolding, the necessary technical effort is doomed to failure [REF-6].

The machine reification of knowledge representations is defined as the automated use of instance-data to construct a representation of an object: an object model is produced from instance-data. The technical language is complete with the notions of category and instance, or class and membership of an object in a class. Service-oriented design creates viable and reusable parts for information processing that are then assembled using governance principles. This is the breakthrough that is now here.
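
To make the direction of construction concrete, the following is a minimal Python sketch, with invented names and data, of what such reification might look like: a category (class-like schema) is inferred from instance-data, and each record becomes an object that is a member of that category.

```python
# A minimal sketch of machine reification: instance-data records are used to
# infer a category (a class-like schema), and each record becomes an instance
# that is a member of that category. Names and data here are illustrative,
# not a real API or dataset.

def infer_category(records):
    """Derive a simple schema (field name -> type name) from raw instance-data."""
    schema = {}
    for record in records:
        for field, value in record.items():
            schema.setdefault(field, type(value).__name__)
    return schema

def reify(records, category_name):
    """Build an object model: a category plus its member instances."""
    return {
        "category": category_name,
        "schema": infer_category(records),
        "instances": records,
    }

if __name__ == "__main__":
    observations = [
        {"student_id": 101, "topic": "fractions", "score": 0.42},
        {"student_id": 102, "topic": "fractions", "score": 0.88},
    ]
    model = reify(observations, "TopicAssessment")
    print(model["schema"])  # {'student_id': 'int', 'topic': 'str', 'score': 'float'}
```

The point of the sketch is only that the category is derived from the instances rather than imposed in advance, which is the sense of "reification" used here.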

Theory

Human awareness uses a substrate, much as human speech uses the phonetic substrate. Deploying a similar computational substrate will enhance the use of data mining. Parts of memory reside within the hippocampus and are assembled into coherent states through an integrative function of various brain regions [REF-7]. Likewise, data mining might produce a substrate consisting of models of different parts of learning processes.

Different types of learners have different substrates. By identifying sets of learning parts, we might gather together, within a virtual system, individuals with common abilities or challenges. Small learning communities might be identified, within something similar to Massive Open Online Courses, based on topic-level knowledge representations. Economies of scale, as well as efficiencies in targeted communication, then become possible.
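
As a rough illustration, not a specification of the proposed system, the sketch below groups students into small learning communities from topic-level mastery vectors. It assumes scikit-learn is available; the topics, scores, and cluster count are invented.

```python
# Sketch: group students into small learning communities from topic-level
# mastery vectors (one value per curriculum topic, in [0, 1]).
# Assumes scikit-learn is installed; data and cluster count are illustrative.
from sklearn.cluster import KMeans

topics = ["fractions", "linear_equations", "functions", "graphing"]
mastery = [              # one row per student
    [0.2, 0.3, 0.1, 0.4],
    [0.9, 0.8, 0.7, 0.9],
    [0.3, 0.2, 0.2, 0.5],
    [0.8, 0.9, 0.6, 0.8],
]

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(mastery)

communities = {}
for student, label in enumerate(labels):
    communities.setdefault(int(label), []).append(student)
print(communities)  # e.g. {0: [0, 2], 1: [1, 3]}
```

Any clustering method would serve; the design point is only that the grouping is driven by a knowledge representation of what each learner has and has not mastered.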

Underlying theory may be reinforced within a new type of social media. Human thought, collectively and as part of individual experience, is a composition made from highly stable sets of basic elements, similar to reflex arcs [REF-8]. This theory has some necessarily complex elements, and service governance must enforce these elements if social media is to have the desired properties. As Pribram pointed out, the notion of a reflex arc is a simplification of something that also involves a gestalt-type assembly into active awareness [REF-9]. Our point is that the issue of how atoms are composed is not well handled by the strong forms of AI. A weak form of AI and a governance policy are needed to work as part of a highly functional service architecture.

For example, a religious or political organization might share certain sets of these parts of thought. Each individual is, in some sense, reinforcing subsets of how "things" may be thought of. If one moves to a different social organization, there may be an entirely different set of basic elements. An evolution of a shared set sometimes occurs, resulting in deep adaptation to changed circumstances. The 9/11 experience, for example, may have deeply and irreversibly changed how some sub-communities in the United States regard personal security concerns.

Stratification

A theory is proposed to explain why our first-year students, and the American public in general, are developing less insight into higher mathematics than is reasonable. The theory reflects the physical fact that nature, or physical reality, organizes into strata. Interactions across organizational scales hold each stratum in a zone in which things emerge and dissipate. We see this, for example, in the structure-function separation between atoms and chemical compounds. The expression of phenotype from genotype is another example, as is the presence of phonemes in speech [REF-10].

Stratification theory suggests that human learning must involve two processes. The first is a modification of how very stable elements of individual perception are put together in acts of conscious awareness. The second is a modification of the base elements themselves. The first is shallow learning; the second is deep learning.

To achieve deep learning, the individual must internalize the experience of knowledge and then re-express this knowledge as a creative act. In the present situation, first-year students are unable to internalize the math curriculum because their sets of memory "engrams" inhibit their advancement into the foundational concepts. In this sense, the community of all students is like a corporation that has no sound underlying governance infrastructure. The use of well-known AI algorithms within a real-time learning assessment system could provide students with coherent governance directed at opening their access to higher mathematics and science.


Figure 1 - The Stratification theory states that base elements, engrams, and supporting memory are aggregated as part of conscious awareness.
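
One hedged illustration of the real-time learning assessment mentioned above is an update in the style of Bayesian Knowledge Tracing, a standard student-modeling algorithm used here only as a stand-in for the "well-known AI algorithms"; the parameter values below are assumptions, not tuned estimates.

```python
# Sketch of a real-time assessment update in the style of Bayesian Knowledge
# Tracing: after each observed response, revise the probability that the
# student has the skill (or, in the language of this article, that the
# relevant engram has been modified). Parameters are illustrative only.

def update_mastery(p_known, correct, slip=0.1, guess=0.2, learn=0.15):
    """Return the updated probability that the student has the skill."""
    if correct:
        evidence = p_known * (1 - slip) / (
            p_known * (1 - slip) + (1 - p_known) * guess)
    else:
        evidence = p_known * slip / (
            p_known * slip + (1 - p_known) * (1 - guess))
    # Allow for learning between practice opportunities.
    return evidence + (1 - evidence) * learn

p = 0.3  # prior belief that the skill is already in place
for answer in [False, False, True, True]:
    p = update_mastery(p, answer)
    print(round(p, 3))
```

A governance layer would decide what to do with such estimates, for example when to revisit a foundational concept rather than advance the student.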

In addition to a lack of clear governance, the community of students has been expressing self-damaging behaviors. Inhibiting engrams may have formed from poor and incomplete instruction in K-12 math classes. Simply put, our youth may have learned how not to learn math. Most students have become incapable of carrying out drill-and-practice instructions, which may explain why college instruction in mathematics has been experiencing increasing failure.

The comparison to organizational makeup is remarkable. For example, the stovepipes that existed in the last decades of the past century remain largely intact, except where a redesign process using governance and service principles has provided a unifying framework. New governmental processes are finding increased efficiencies and reduced costs. What stands in the way is social resistance, often based on uncertainty.

Classroom Governance

It is reasonable to say that our educational system has been designed without the best governance, much like enterprise service architecture before the advent of service-oriented design. Current tacit classroom governance rests on two common but faulty assertions: that an inability to demonstrate a skill means either that the individual is simply not capable of learning it, or that the individual has not yet been presented with the material.

Textbook design reflects these two assumptions, as does the lecture-based instructional model. The theory advanced here is that many individuals were improperly exposed to labels such as "number," "function," and "set," and that these labels acquired false meanings, often associated with a sense of disdain. A refactoring of the entire process is necessary and feasible. A new governance model may open access where the door is presently closed.

The goal is now clear: a set of common memory engrams must be modified. This modification cannot occur if the instructional model continues to rest on those assumptions. On the other hand, revisiting the core concepts of K-12 education within an adaptive deep learning environment allows first-year college students to get a handle on false labeling. This is what is proposed in Prueitt's National Educational Bridge [REF-11].

The Next Data Mining Market

The next revolution in information technology may have an impact similar to what we saw in the 1980s, when the individual was empowered by the move from centrally located mainframe computing to desktop computers. This empowerment of the individual continues today as smartphone and tablet technology blankets the planet. However, certain cultural and technological limitations are felt; the technology seems to be in the way. The flexibility made available by proper governance policies and by service design may be the correct means to remove this limitation.

A critical factor will separate past social revolutions from those now occurring. Something new will happen: instead of increasing the amount of information, or data overload, new technologies will synthesize massive data sources into information that is easily consumed as individualized, human-aware knowledge. Control over the data-to-knowledge transformation might shift to the individual, acting for his or her own benefit in everyday life.

One enabler of this change will be standards for communicating the results of data mining. The Cross-Industry Standard Process for Data Mining (CRISP-DM) [REF-12] defines six major phases for data mining: business understanding, data understanding, data preparation, modeling, evaluation, and deployment. This process model may be compared with Prueitt's work in the late 1990s, which focused on establishing a neurological grounding [REF-13, 14]. There, an action-perception cycle is instrumented, in which the first step is perceptual measurement.
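
The six CRISP-DM phases can be written down as an ordered pipeline; the sketch below does only that, with placeholder step functions, and is an illustration rather than any part of the standard itself.

```python
# Sketch: the six CRISP-DM phases expressed as an ordered pipeline.
# The phase names come from the standard; the step functions are placeholders.

CRISP_DM_PHASES = [
    "business understanding",
    "data understanding",
    "data preparation",
    "modeling",
    "evaluation",
    "deployment",
]

def run_cycle(project_state, steps):
    """Apply each phase's step to the evolving project state, in order."""
    for phase in CRISP_DM_PHASES:
        project_state = steps[phase](project_state)
    return project_state

# Placeholder steps that simply record which phases have been completed.
steps = {phase: (lambda state, p=phase: state + [p]) for phase in CRISP_DM_PHASES}
print(run_cycle([], steps))
```

In practice the process is iterative, with evaluation results feeding back into business understanding, much as perceptual measurement feeds the action-perception cycle described above.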

Service-oriented architecture [REF-15] will play a role in every aspect of the transformation of data into experienced human knowledge. The promise is that both data mining and knowledge discovery in databases (KDD) methods might be normalized as transparent, computable services. The role of our common perception of how human knowledge comes about is critical if this normalization is to result in the economic renewal we anticipate.

The Advent of Knowledge Discovery

Data mining has evolved over the last several decades. Algorithmic methods combine knowledge representation, domain-specific knowledge, and analytic methods to uncover hidden relationships, identify patterns of relationships, and associate specific data with model trends. Particular interest has centered on military and intelligence applications, with nascent use in advertising, political campaigns, and supply chain analysis.

Text-extraction-to-ontology methods have been developed to support biomedical analysis and as part of emergency response to biological threats, such as influenza virus mutation [REF-16]. The science literature has been subject to increasing automated data mining as a means of directing researchers, should the flu virus or other biological agents suddenly manifest as a threat to human health. All of these collective social activities are improved through unified governance, service definition from a set of underlying functional service components, and specific linkage to process models.
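
A minimal sketch of a text-extraction-to-ontology step is shown below; the pathogen terms and class names are illustrative assumptions, and a production system would use trained entity extractors rather than a keyword list.

```python
# Sketch: scan literature snippets for pathogen mentions and file each one as
# an instance under a class in a tiny ontology-like structure.
# Terms, class names, and documents are invented for illustration.
import re

PATHOGEN_TERMS = {
    "h1n1": "InfluenzaStrain",
    "h5n1": "InfluenzaStrain",
    "influenza": "Pathogen",
}

def extract_to_ontology(documents):
    ontology = {}
    for doc_id, text in documents.items():
        for term, cls in PATHOGEN_TERMS.items():
            if re.search(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
                ontology.setdefault(cls, set()).add((term, doc_id))
    return ontology

docs = {
    "paper-17": "Surveillance of H1N1 mutation rates in swine populations.",
    "paper-42": "Seasonal influenza vaccine coverage and efficacy.",
}
print(extract_to_ontology(docs))
```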

The use of knowledge representational methods in decision-making and in modeling has become more widespread. Extensive work on knowledge discovery from massive data sets occurs in almost every country in the world. The quality of modern service architecture supports a lifecycle approach in which the purpose of data mining, data sources, definition of algorithms, definition of metadata, deployment issues, and monitoring results are evolved within several well-known standards.

If only it were that simple. Knowledge discovery in databases comes complete with a class of limitations that are difficult to see. The neurology of perception helps us understand these limitations, work within them, and eventually work around or overcome them. The key is to build linkage between service components and the atoms involved in human learning. In a very real sense, we seek to give the human-computer interface a set of cognitive constructions and a perceptual system. This would usher in a new era, so the difficulty of the task should not be underestimated.

Data mining programs now support biomedical research, earth-oriented data analysis, the representation of trends in social thought, supply chain analysis, education [REF-17], and other similar areas of inquiry. The growing worldwide development of intelligence technology suggests that markets will develop in areas in which an open evolution of methods can occur and new areas of open application will be identified.

Educational Failure is National Security Failure

The potential social value of data mining is achieved when data becomes represented in a computable form as abstracted models of everyday events and entities. These models are called ontological models and are typically expressed in KIF, OWL, RDF, or topic maps. However, a critical barrier becomes apparent when the following question is asked: how many average American citizens know what ontological models are?
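
For concreteness, the following sketch shows what a small ontological model of everyday entities might look like, written as plain (subject, predicate, object) triples in the spirit of RDF; the vocabulary URIs are invented for illustration.

```python
# A minimal sketch of an "ontological model": everyday entities described as
# RDF-style (subject, predicate, object) triples, kept as plain tuples so no
# library is needed. The vocabulary URIs are illustrative, not a published
# ontology.

EX = "http://example.org/schema/"

triples = [
    (EX + "Student_101",      EX + "type",       EX + "Student"),
    (EX + "Student_101",      EX + "enrolledIn", EX + "Course_Math1001"),
    (EX + "Course_Math1001",  EX + "type",       EX + "Course"),
    (EX + "Course_Math1001",  EX + "covers",     EX + "Topic_Fractions"),
]

def describe(subject, graph):
    """Collect every statement made about one entity."""
    return [(p, o) for s, p, o in graph if s == subject]

print(describe(EX + "Student_101", triples))
```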

Every social revolution has profound consequences. We need to be the ones pioneering the change, rather than following along after everyone else is already using Smart Data Mining (SDM) with SOA governance. Why not view this as an opportunity to develop educational technology for the masses? The current market is one way in which we move toward this goal, with suppliers producing models for consumers. The next revolution is one in which consumers produce and share these models.

To review, common knowledge of how automated reification builds knowledge representations is not present in current educational programs. There are reasons for this, including the use of this technology in classified settings and the artificiality of artificial intelligence, at least so far. Fixing this problem is essential if our society is to enjoy the use of the methods that are now being developed in bioengineering, bioinformatics, massive data analysis in education, and in the governance of systems like the national healthcare infrastructure.

A working knowledge of how thoughts come into existence within our minds is needed to allow the individual to be an actor in an analysis that is an instance of group thought. Neurology needs to inform knowledge engineering more than it currently does. As the transformation of services into individualized working knowledge becomes common, consumers might gain control over how results are used.

In this article, we have made the case that data mining plus SOA has applications in spaces other than military and intelligence. IBM is exploring these spaces with the CRISP-DM standard [REF-18].

Data Mining Applications in Education

The work now underway is a prototype of a new type of Massive Open Online Course (MOOC) software system. Every MOOC today is much like a one-way lecture, with little communication flowing back from the hundreds of thousands of students. Our proposal would field a Massive Online Tutorial System (MOTS) [REF-19]. Deep learning algorithms and a representation of the topics in the curriculum are to be integrated with new pedagogy [REF-20].

Individualization will occur as data mining and representational models of learning become available to place individuals into learning communities that share common difficulties and opportunities. Vast numbers of MOOC students may be divided into smaller learning communities. An iterative reassignment of membership allows the individual to move within the MOTS, depending on demonstrated skill, synthesis, and evaluative assessments, as sketched below. Data mining and micro-selection within a service-oriented architecture are part of the necessary design specifications. Architecture is essentially a mechanism through which collective intelligence may manifest within social networks.
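
The iterative reassignment step might look like the following sketch, in which a student's updated mastery vector is compared against each community's average profile after an assessment cycle; the profiles and numbers are invented.

```python
# Sketch of iterative reassignment: after an assessment cycle, the student's
# updated mastery vector is compared to each learning community's average
# profile and the student moves to the closest one. Data is illustrative.

def distance(a, b):
    """Euclidean distance between two mastery vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def reassign(student_vector, community_profiles):
    """Return the id of the learning community whose profile is closest."""
    return min(community_profiles,
               key=lambda cid: distance(student_vector, community_profiles[cid]))

profiles = {
    "remedial-fractions": [0.2, 0.3, 0.2],
    "ready-for-algebra":  [0.8, 0.7, 0.9],
}
print(reassign([0.75, 0.6, 0.85], profiles))  # -> "ready-for-algebra"
```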

Conclusion

The coming revolution will not be specific to education alone; it will have a generality that allows for the individual creation and social exchange of representational models. This generality is something of a surprise, and it overcomes misrepresentation. Traditional approaches to computational knowledge representation have not followed neuroscience; when neuroscience is followed, this generality can result.

Our approach is to use a weak form of AI in which cognitive media is not thought to have a capacity for precision. Cognitive media (CM) is instead thought to serve as a communication medium. The surprise comes in the form of the mediation of transformations of massive data into individually evocative symbol sets. This mediation is expressed in such a way that average individuals, untrained in abstract formalisms and the particulars of a specific technology system, are able to interact freely and easily.

References

[REF-1] Heffner, Randy. (Webinar) Enhance SOA Flexibility with Policy Management. Accessed at:
www.forrester.com/Enhance+SOA+Flexibility+With+Policy+Management/-/E-WEB2342 (2008)

[REF-2] O'Brien, Adelaide. New Possibilities, Flexibility with SOA. Accessed at: www.accenture.com/us-en/Pages/insight-new-approach-possibilities-flexibility-soa-adelaide-obrien.aspx (2012)

[REF-3] Bennett, Stephen, Thomas Erl, Clive Gee, Robert Laird, Anne Thomas Manes, Robert Schneider, Leo Shuster, Andre Tost, and Chris Venable. SOA Governance: Governing Shared Services On-Premise and in the Cloud. Prentice Hall: Toronto (2010)

[REF-4] Novak, Joe. Concept Maps: Theory, Methodology, Technology, Proceedings of the First International Conference on Concept Mapping. Pamplona, Spain (September, 2004)

[REF-5] Prueitt, Paul. Stratification Theory as Applied to Neural Architecture Enabling a Brain-Like Function for Social Networks. Presented at:
Winter Chaos Conference, Blueberry Brain Institute, Southern Connecticut State University (2011)

[REF-6] Rosa, Manuel and Sampaio. Promoting Organizational Visibility for SOA and SOA Governance Initiatives Part I. servicetechmag.com/I73/0613-4 (2013)

[REF-7] Prueitt, Paul. A Theory of Process Compartments in Biological and Ecological Systems. IEEE Workshop: Architectures for Semiotic Modeling and Situation Analysis in Large Complex Systems. Monterey, CA (August, 1995)

[REF-8] Pribram, Karl. Languages of the Brain: Experimental Paradoxes and Principles in Neuropsychology. Prentice-Hall: New Jersey (1971)

[REF-9] Pribram, Karl (ed.). Rethinking Neural Networks: Quantum Fields and Biological Data. Erlbaum: New Jersey (1993)

[REF-10] Borowsky R, Esopenko C, Cummine J, Sarty GE. Neural Representations of Visual Words and Objects: A Functional MRI Study on the Modularity of Reading and Object Processing. Brain Topography Journal: issue 20, 2 (2007)

[REF-11] Prueitt, Paul. American Education Bridge, Technology and Pedagogy. Presented at:
3rd International Conference on Education, Training and Informatics. Orlando, Florida (March, 2012)

[REF-12] Wiki Page: en.wikipedia.org/wiki/Cross_Industry_Standard_Process_for_Data_Mining

[REF-13] Prueitt, Paul. Core Ontology with Stratified Architecture: Safeguarding National Security and Productivity. Published at:
www.educationworlds.com/pdf/Architecture.pdf (2013)

[REF-14] Prueitt, Paul. Technical Foundations to Stratified Theory and Articulated Machines. (2011)

[REF-15] Prueitt, Paul. Articulating SOA in the Cloud. http://www.soamag.com/I34/1109-4.php (2009)

[REF-16] E. Aramaki, S. Maskawa, and M. Morita. Twitter Catches the Flu: Detecting Influenza Epidemics Using Twitter. Presented at:
Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1568–1576 (2011)

[REF-17] Luan, Jing. Data Mining Applications in Higher Education. Accessed at: public.dhe.ibm.com/common/ssi/ecm/en/imw14303usen/IMW14303USEN.PDF (2010)

[REF-18] Wiki Page: en.wikipedia.org/wiki/Cross_Industry_Standard_Process_for_Data_Mining

[REF-19] Prueitt, Paul. Stratification Theory as Applied to Neural Architecture Enabling a Brain-Like Function for Social Networks. Presented at:
Winter Chaos Conference of the Blueberry Brain Institute, Southern Connecticut State University (March, 2011)

[REF-20] Prueitt, Paul. Deep Learning Methods and Adaptive Assessment. Presented at:
University System of Georgia Teaching and Learning Conference: Best Practices for Promoting Engaged Student Learning (2013)