Measurement Theory and Lie Detection
It is sometimes said that it is not possible to actually measure a lie. Simplistic and concrete thinkers, and those opposed to the polygraph test, are content to end the discussion at this point and offer the impulsive conclusion that scientific tests for lie detection and credibility assessment are not possible. This conclusion is a non sequitur, because many areas of science involve the quantification of phenomena for which direct physical measurement is not possible. The theory of the polygraph test, and of lie detection and credibility assessment in general, does not in fact involve the measurement of deception or truth-telling. Nor does it involve the measurement, or recording, of fear or any other specific emotion.
This publication attempts to introduce and orient the reader to measurement theory and its application to the problem of the polygraph and scientific lie detection or credibility assessment testing. The analytic theory of the polygraph is that greater changes in physiological activity are loaded at different types of test stimuli as a function of deception or truth-telling in response to the relevant target stimuli (Nelson, 2015a, 2016; Senter, Weatherman, Krapohl & Horvath, 2010).
In the absence of an analytic theory or hypothesis of polygraph testing, polygraph theories have previously been expressed in terms intended to describe the psychological process or mechanism responsible for reactions to polygraph test stimuli. Although much has been learned about the recordable physiology associated with deception and polygraph testing, less work has been done to investigate psychological hypotheses about deception. In general, the psychological basis of the polygraph is presently assumed to involve a combination of emotional, cognitive and behaviorally conditioned factors (Handler, Shaw & Gougler, 2010; Handler, Deitchman, Kuczek, Hoffman, & Nelson, 2013; Kahn, Nelson & Handler, 2009).
The analytic theory of polygraph testing implies that there are physiological changes associated with deception and truth-telling, and that these changes can be recorded, analyzed, and quantified through the comparison of responses to different types of test stimuli. Comparison and quantification are objectives central to measurement theory. Applying measurement theory to the polygraph test will require at least a basic understanding of measurement theory.
Types of measurement

Stevens (1946) attempted to provide a framework for understanding types of measurement. At the time, part of the intent was to clarify the selection of statistical and analytic methods associated with different types of measurement data. It was evident almost immediately that the selection of statistical methods was a more complex endeavor than could be characterized by reducing the array of data types and scientific questions to a small set of categories. Nominal scales are without any rank order meaning (e.g., cat, mouse, dog, ostrich, zombie, robot). Mathematical transformation of nominal items is not possible. Ordinal measurements have rank order meaning but carry imprecise meaning about the distance between items (e.g., knowing the first, second and third place winners of an ostrich race does not provide information about the differences in race times). Some mathematical transformations are possible with ordinal measurements, with the requirement that they preserve the ordinal information and meaning. Interval scale measurements have both rank order meaning and meaningful information about the distance between items. However, the zero point of an interval scale is arbitrary and therefore meaningless.
A classic teaching example of the arbitrariness of an interval-scale zero point is temperature, for which we have both the Fahrenheit and Celsius scales with different arbitrary zero points, and no expectation that zero means there is no temperature or no heat to be measured. Ratio measurements combine rank order meaning and interval distance meaning with a non-arbitrary zero point. In ratio scale measurements, zero means none (e.g., no difference). Later, Stevens (1951) offered a set of prescriptions and proscriptions as to the types of statistics that are appropriate for each type of data. The most common criticisms of Stevens' framework are that it is unnecessarily restrictive (Velleman & Wilkinson, 1993), resulting in the overuse of non-parametric methods that are known to be less efficient than parametric methods (Baker, Hardyck, & Petrinovich, 1966; Borgatta & Bohrnstedt, 1980), and that the type of analysis should be determined by the research question being asked (Guttman, 1977; Lord, 1953; Tukey, 1961). Luce (1997) asserted directly that measurement theorists today do not accept Stevens' overly broad definition of measurement. Nevertheless, Stevens's work provides a useful introduction to the conceptual language and problems of measurement theory.
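The temperature example can be made concrete in a few lines of code. The sketch below is illustrative only; the scale conversions are the standard ones, and the point is that differences are meaningful on an interval scale while ratios are not:

```python
# Temperatures in Celsius and Fahrenheit are interval-scale data with
# arbitrary zero points; Kelvin is (approximately) a ratio scale whose
# zero point really does mean "no heat".

def c_to_f(c):
    """Convert degrees Celsius to degrees Fahrenheit (interval to interval)."""
    return c * 9.0 / 5.0 + 32.0

def c_to_k(c):
    """Convert degrees Celsius to kelvins (interval to ratio)."""
    return c + 273.15

# Differences (intervals) survive the change of scale:
# a 10 degree Celsius difference is always an 18 degree Fahrenheit difference.
print(c_to_f(30.0) - c_to_f(20.0))  # 18.0

# Ratios do not: 20 C is not "twice as hot" as 10 C, and the ratio
# changes depending on which arbitrary zero point was chosen.
print(c_to_f(20.0) / c_to_f(10.0))  # 1.36
print(c_to_k(20.0) / c_to_k(10.0))  # about 1.035
```

Only on the ratio (Kelvin) scale does the ratio of two temperatures have a stable physical meaning, which is exactly why the zero points of the interval scales are described as arbitrary.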
Measurement theory is an area of science concerned with the investigation of measurability and what makes measurement possible. Helmholtz (1887) began the tradition of scientific and philosophical inquiry into measurement theory by asking the question “why can numbers be assigned to things?”, along with other questions such as “what can be understood from those numbers?” According to Campbell (1920/1957), measurement is the process of using numbers to represent qualities. In general, the properties of measurable phenomena must in some ways resemble the properties of numbers. Later work by Suppes (1951) on the differences between measurable and un-measurable phenomena began to formalize the tradition of measurement theory by clarifying our understanding of the requirements for measurement, and gave rise to a modern representational theory of measurement (Diez, 1997; Suppes, 2002; Suppes & Zinnes, 1963; Suppes, Krantz, Luce, & Tversky, 1989; Niederee, 1992). Stated simply, the representational theory of measurement involves the assignment of numbers to physical phenomena such that empirical or observable relationships are preserved.
The existence of order (rank order) relationships between measurable objects is central to the requirements for the measurability of any phenomenon. We must be able to quantify one instance of the phenomenon as having greater magnitude than another. Another central requirement of measurable phenomena is that there must be a way of combining measurable objects that is analogous to mathematical addition. That is, the addition of measurable phenomena must have a sensible physical interpretation. These are among the main differences between measurable and un-measurable phenomena. For example: measurements can be applied to physical phenomena such as a person’s height, weight, and blood pressure. This is possible because these things involve physical phenomena: the linear or unitized distance from head to toe, the gravitational force on a person’s physical mass, and the unitized pressure required to overcome arterial pressure and occlude blood flow, relative to a reference point such as average atmospheric pressure at sea level (i.e., 29.92 inHg or 760 mmHg).
These phenomena can be combined in ways that are in some way analogous to numerical addition. That is, there is some coherent physical interpretation of additive combinations of different instances of these physical phenomena. Time-limited events can also be measured. For example: if a person jumps into the air two times, and if we mark the physical height of each jump and then combine the two distances, this too is analogous to numerical addition. However, attempts to record physiological changes in response to polygraph stimuli do not necessarily conform to these requirements for rank order relationships and additivity. The details of how recorded polygraph data can result in the quantification of deception and truth-telling are addressed in the remainder of this publication. Firstly, it has long been established that responses to polygraph stimuli cannot be taken or interpreted directly as a measurement of deception. Nor can responses to polygraph stimuli be interpreted as a recording or measurement of fear or any other specific emotion. Responses to polygraph stimuli are a form of proxy or substitute data for which there is a relationship or correlation with deception and truth-telling.
The reactions and recorded data themselves are neither deception nor truth-telling per se. Secondly, although it may be possible to interpret rank-order relationships between test stimuli according to the magnitude of response, polygraph recording instrumentation today has not been designed to provide data that satisfy the additivity requirement for measurement data. In other words, no sensible additive combination of the actual response data within each of the respiration, cardio, electrodermal and vasomotor sensors is either intended or established. Instead, polygraph data must be transformed to a more abstract form before they can be further analyzed and interpreted. Polygraph scoring and analysis algorithms, whether manual or automated, are intended to accomplish and facilitate such transformation, analysis and interpretation.¹

¹ A major difference between manual and automated polygraph analysis algorithms is that manual scoring protocols were developed during a time when field practitioners did not have access to, and were unfamiliar with the use of, powerful microcomputers. Manual scoring algorithms therefore rely on mathematical transformations that are, of necessity, very simple, if not somewhat blunt. Earlier versions of manual scoring protocols did not make use of normative reference distributions, statistical corrections or confidence intervals. Another major difference is that manual scoring protocols accomplish feature extraction tasks (the extraction of signal information from other recorded information and noise) using subjective visual methods. Automated analysis algorithms can make use of more advanced statistical methods, and can rely on objective and automated feature extraction methods that are less vulnerable to subjective interference.

Fundamental and derived measurements

Some measurements can be referred to as fundamental, and require no previously measured phenomena to achieve their determination. The main requirement for a fundamental measurement is that there is some physical phenomenon involving some quantity that can be understood as either more or less (e.g., is it heavy?) as opposed to phenomena that are better understood as all-or-nothing (e.g., is it an ostrich?). If we have two ostriches, it makes some sense to ask which ostrich is heavier, because there is meaningful intuition around the idea that some ostriches are heavier. But it does not make sense to ask which is more an ostrich, because there is no meaningful intuition to be gained from its answer. Being an ostrich is a property, not a quantity. The weight of an ostrich is also a property, which illustrates that some properties can also be quantities. The physical phenomenon of weight or heaviness can be quantified to achieve greater precision than simply saying very heavy or very very heavy when attempting to compare the weight of two ostriches. Without the use of numerical quantities, two different observers might reach two different conclusions about which ostrich is heavier, no matter how we attempt to use our descriptive adjectives. Different observers are more likely to reach similar conclusions when using measurements than when not using measurements. The use of measurements permits us to think about, understand, describe and plan the world around us with greater precision, which is to say greater reproducibility. When a measurement is not intended or not expected to be a precise or exact quantity, it is sometimes referred to as an estimate.
Probabilities, because they are not expected to be exact, are estimates. Although some may use or express the notion of probabilities subjectively, reproducibility of computational probability estimates is an important difference between the scientific and unscientific use of the concept of probability. Some measurements can be thought of as derived, because these are achieved not through the direct quantification of a physical phenomenon, but through the comparison of an unquantified physical phenomenon with another known physical phenomenon. In principle, we can measure an unknown distance if we have some other distances and angles that are already known. For example, if we place a set of satellites in orbit around the earth we can calculate and know the locations of those satellites relative to a set of objects for which the locations are known on the earth. Then, if we have some means of receiving information from the satellites with known locations, we can use the information from the satellites to calculate and measure our own location if our location is unknown.
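The satellite example can be sketched concretely. The following is a minimal, hypothetical illustration (reduced to two dimensions, with invented coordinates) of how an unknown location is derived from distances to points whose locations are already known:

```python
import math

# Derived measurement: an unknown position is calculated from distances
# to three anchor points whose positions are already known (a 2-D analogue
# of satellite positioning; the coordinates are invented for illustration).

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Return (x, y) given three known points and measured distances to them."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the three circle equations pairwise leaves a 2x2 linear system.
    a1, b1 = 2.0 * (x2 - x1), 2.0 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2.0 * (x3 - x1), 2.0 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero only if the anchors are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Three anchors with known positions, and distances to a point at (3, 4):
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [math.dist(a, (3.0, 4.0)) for a in anchors]
x, y = trilaterate(anchors[0], dists[0], anchors[1], dists[1], anchors[2], dists[2])
print(x, y)  # approximately (3.0, 4.0)
```

Nothing here is measured directly; the position is derived entirely from other quantities that were already known or measured, which is the defining feature of a derived measurement.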
This is similar to older practices of celestial navigation, in which the locations of objects in the solar system could be calculated by counting or quantifying the number of days since a previously observed event, and those locations could then be used, along with a defined system of scientific and mathematical rules, to measure or quantify a current location on the earth. Another example of a derived measurement is the measurement of blood pressure, for which we use our knowledge about atmospheric pressure to quantify our assessments of cardio pressures during the systolic and diastolic phases of the cardiac cycle.

Scientific testing as a form of (probabilistic) measurement

As it happens, many interesting and important phenomena either cannot be observed directly or are not subject to physical measurement.
This is sometimes because the phenomenon of interest is amorphous (without physical substance), and sometimes because the information does not conform to the order and additivity requirements of measurement. If we want to improve the precision of our assessment and decisions for these phenomena we will need to rely not on measurements but on scientific tests that quantify a phenomenon of interest using statistics and probability theory. Nelson (2015b) provided a description of how a polygraph test, and tests in general, can be thought of as a single subject science experiment. Scientific tests can also be thought of as a form of probabilistic measurement, in which statistical and probability theories are used to quantify a phenomenon that is not amenable to actual measurement.
An example of scientific testing as a form of probabilistic measurement is the testing of amorphous and un-measurable psychological phenomena such as personality and intellectual functioning, in which an observed quantity of data from an individual is compared mathematically to a known quantity in the form of a normative reference distribution, or probability reference model, that characterizes our knowledge of what we expect to observe. Reference models can be calculated empirically, through statistical sampling methods, and can also take the form of theoretical reference distributions that characterize our working theories about how the universe, or some small part of it, works by relying only on facts and information that are subject to mathematical and logical proof. In the case of the polygraph test – for which the basic analytic theory holds that greater changes in physiological activity will be loaded for different types of test stimuli as a function of deception and truth-telling in response to the relevant stimuli – it is not the comparison of relevant and other test questions that forms the basis of our conclusions. Instead, it is the comparison of differences in reactions to relevant and other test questions against a reference distribution that anchors our knowledge about the expected differences in responses to relevant and other questions among deceptive or truthful persons. Ideally, other questions would have the potential to evoke cognitive and emotional activity of similar quality, though perhaps different magnitude, than the relevant target stimuli. However, it is not necessary that other questions have similar ecological value compared to the relevant stimuli in order to be a useful and effective basis for statistical comparison. An example of this can be seen in the use of directed lie comparison (DLC) questions, for which Blalock, Nelson, Handler & Shaw (2011) provided a summary of the research on their effectiveness (the name DLC should not be taken to imply that responses to these questions are actual lies).
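The logic of locating an observed score within a normative reference distribution can be sketched as follows. This is a hypothetical illustration only: the normative mean, standard deviation, and observed score are invented placeholders, not actual polygraph norms or scoring features:

```python
import math

def normal_cdf(z):
    """Cumulative distribution function of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def locate_in_norm(observed, norm_mean, norm_sd):
    """Standardize an observed score against a normative distribution and
    return (z, p), where p is the proportion of the reference group
    expected to score at or below the observed value."""
    z = (observed - norm_mean) / norm_sd
    return z, normal_cdf(z)

# Hypothetical truthful-group norm for a summed numerical score:
z, p = locate_in_norm(observed=-2.1, norm_mean=0.0, norm_sd=1.0)
print(round(z, 2), round(p, 4))  # an observed score far below the truthful norm
```

The conclusion rests not on the raw responses themselves, but on where the observed difference score falls within the distribution of scores expected from the reference group, which is the comparison-to-reference-distribution idea described above.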
Scientific tests as a form of prediction

If we want to quantify or improve the accuracy or precision associated with our assessments and conclusions about future events that have not yet occurred – assuming we want to quantify our conclusions now, without waiting for the event to occur – then we are once again attempting to quantify a phenomenon that is not amenable to direct observation or measurement. For this we need a test, with which we can make probabilistic conclusions about the future outcome. Tests used in this way can be thought of as a form of scientific prediction. This is not a form of magic or divination; it is a form of probabilistic modeling. An example of the quantification of a future event is the measurement or quantification of the risk level for some hazardous event – for which it is implicit that the future event has not yet occurred and therefore cannot be physically quantified or observed. Another example, involving the prediction of a future event, is the quantification of the outcome of an election that has not yet occurred. Both examples – risk outcomes and election outcomes – can involve a future event for which the associated value is binary (e.g., an event has or has not occurred, or an election has been won or not won). At any single point in time, the event has either occurred or has not occurred. We might, at times, want to simply wait and observe the result to achieve a deterministic conclusion. Deterministic observation of an outcome would, of course, obviate any need for testing and quantification. A notable difference between the prediction of risk events and scheduled outcomes is that election outcomes can be expected to occur at a scheduled point in time, at which time it is possible to observe the result. After the scheduled event, the outcome is a matter of fact, not probability.
Prior to the scheduled event, the outcome can be thought of as a probability, such that there are some factors that are associated with the different possible outcomes. A goal of scientific prediction involves the identification of these associated factors so that they can be characterized as random variables and used to develop a predictive test or model. Probabilities associated with the outcomes of scheduled events that have not yet occurred can be thought of as the proportion of outcomes that would occur a certain way, given the random variables that influence the outcome, if it were possible to observe the event over numerous repetitions.
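The idea of a probability as the proportion of outcomes over numerous repetitions can be illustrated with a small simulation. The vote probability, electorate size, and repetition count below are invented purely for illustration:

```python
import random

def estimate_win_probability(p_vote_a=0.52, n_voters=1001, n_reps=2000, seed=42):
    """Estimate the probability that candidate A wins a majority by
    simulating the election many times and counting the proportion
    of repetitions in which A wins."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_reps):
        # Each voter independently votes for A with probability p_vote_a.
        votes_a = sum(rng.random() < p_vote_a for _ in range(n_voters))
        if votes_a > n_voters // 2:
            wins += 1
    return wins / n_reps

print(estimate_win_probability())  # roughly 0.9 with these invented inputs
```

Because the event cannot actually be observed over numerous repetitions, the simulation stands in for those repetitions: the estimated probability is simply the proportion of simulated outcomes that occurred a certain way.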
The effectiveness or precision of a test as a predictive model will depend on our ability to correctly understand the random variables related to the possible outcomes. Ultimately, the outcome will be a certainty, not a probability. Prior to the outcome's occurrence, it remains a probability or prediction. When prediction errors occur, their causes can be due either to random variation or to misunderstanding and mischaracterizing the random variables related to the possible outcomes. Some types of outcomes are expected to occur at an unknown time, or may not occur at all for very long periods of time. We can think of these outcomes as probabilities. For example: what is the probability that a known criminal offender will re-offend, or what is the probability of an earthquake in Mexico City, or what is the probability of a flood? These events can also be regarded as certainties after they have occurred, and are also subject to some relationship with related factors that are associated with their occurrence. As with other prediction models, identification and characterization of the associated factors is an important objective in the development of risk assessment or risk prediction models. Probabilities associated with risk prediction outcomes can be thought of in terms of frequencies, such that high probability events occur with greater frequency, while low probability events occur with lower frequency. Nearly everything – including events for which our intuition tells us the likelihood is very low – can be thought of as a probability. This can, at times, be taken to absurdity.
For example: what is the probability of a zombie horde attack, or what is the probability of a robot apocalypse? For these extreme examples our intuition tells us the probability is either absolute zero or essentially zero, but we can still engage some imagination as to the factors that could become associated with their occurrence. If we expand the period under consideration, then the probabilities associated with rare events can become conceivably greater. For example: what is the probability that an ostrich will fall from the sky? If we expand our dimensions for time and location to the notions of ever and anywhere, we can intuitively understand some non-zero probability associated with an ostrich falling from the sky, along with the kinds of factors that might be associated with its possible occurrence (e.g., an emergency ostrich airlift from a flooded ostrich farm). Quantification of future events such as hazards or election outcomes requires that we treat the future outcome in the same manner as any other amorphous phenomenon that we may wish to quantify. We treat the future outcome as a probability. Quantification of an outcome is useful only when it is a future outcome – an outcome that has not yet occurred. If information exists, and is available for observation or measurement, then the outcome is not amorphous but is a physical phenomenon. Direct observation or measurement of a future outcome will require that we wait until the future point in time. Until then, if we want to try to predict a future outcome that has not yet occurred, we will need to rely on probabilities to describe the amorphous future event. Similarly, observation or measurement of a past event will require that some physical phenomena from the event are available for observation or measurement.
If we wish to quantify a past event for which no physical phenomena are available, then we will once again need to rely on probability theory to quantify the amorphous phenomenon. A famous quotation of unknown Danish authorship, from the years 1937-1938, states [in English]: “It is difficult to make predictions, especially about the future.” This simple and humorous quotation reminds us that predictions of all kinds are inherently imperfect, including predictions based on scientific test data. Probabilistic conclusions are inherently imperfect. Indeed, they are not expected to be perfect. Probabilistic conclusions are expected only to quantify the margin of uncertainty associated with a conclusion. Statistical prediction is an inherently probabilistic endeavor for which any conclusion is both probably correct and probably incorrect. Conclusions about deception or truth-telling, despite the desire for certainty and infallibility, will be inherently probabilistic and inherently imperfect.

Conclusion: scientific polygraph tests as a form of statistical classification

Polygraph test results can be thought of as a form of prediction that some other evidence exists and can be identified as a basis to confirm or refute the test result. A simpler and more general way to think about these tests is as a form of statistical classification. Like other scientific tests, statistical tests intended for classification are not expected to be perfect, infallible or deterministic. Neither are statistical classifications expected to provide the same level of precision as an actual measurement of a physical phenomenon.
Like other probabilistic endeavors, scientific tests intended for classification are expected only to quantify the margin of uncertainty or level of confidence that can be attributed to a conclusion. Most importantly, the method for statistical quantification should be accountable, and the results should be reproducible by others. The ultimate measure of the effectiveness of a statistical test is not the achievement of perfection or infallibility, but the observation of correct and incorrect real-world classifications that conform to our calculated probability estimates. If the basic analytic theory of the polygraph test is incorrect – if no physiological changes are correlated with differences between deception and truth-telling, if all physiological activity is mere random chaos with regard to deception and truth-telling – then humans have virtually no chance of ever knowing whether they are being lied to with any precision greater than random chance.
The only way to protect oneself from deception would then be to remain cynical and suspicious of all, while trusting no one. Although perhaps tempting, this would be unrealistic and unsustainable over time. On the other hand, if it is correct that some changes in physiological activity are associated with deception and truth-telling at rates significantly greater than chance, then it is only a matter of time before technologists, engineers, mathematicians, statisticians and data analysts devise some means to increase the availability of useful signal information amid the chaotic noise of other physiological activities, and exploit those signals with some new form of scientific credibility assessment or lie detection test. If the polygraph test is ultimately an interrogation and not a scientific test, then measurement theory is of no concern and no consequence to the polygraph profession. But in that case, people will begin to turn to other scientific methodologies when they desire a scientific test for credibility assessment, and the polygraph test may eventually be replaced. On the other hand, if the polygraph test is a scientific test, then it will serve the interests of all for polygraph professionals to become familiar with the basics of measurement theory and to discuss scientific polygraph test results, including categorical conclusions about deception and truth-telling and conclusions about countermeasures, using the conceptual language of measurement and probability theories. Polygraph conclusions are not physical measurements; they are probability estimates. In the absence of probabilistic thinking applied to the polygraph test, there will be an impulse for some to engage in naïve and unrealistic expectations of deterministic perfection.
There will also be a desire or impulse for some to feign infallibility, attributed to superior professional wizardry or skill, and this can for a time appear to be an effective marketing strategy. But feigned infallibility will lead to confusion and frustration when it is inevitably observed that testing errors can, and do, occur. A temporary corrective solution to this frustration will be to find fault with the professional, not the test – thereby restoring the false assumption of infallibility, so long as we avoid those less competent wizards or less competent experts. Although gratifying for a time, this type of approach is unscientific, and will be unsustainable in the context of real-world experience and scientific evidence. Polygraph test results should be understood and described like other scientific test results, using the conceptual language of statistical probabilities. Expression of purportedly scientific conclusions, including conclusions about deception and truth-telling and conclusions about the use of countermeasures, without the use of probability metrics will invite the accusation that the polygraph is mere subjective pseudoscience cloaked in overconfidence. A scientific approach to polygraph testing will recognize that the task of any test is to quantify a phenomenon probabilistically when direct observation or physical measurement is not possible, and to recognize and make accountable use of the potential for testing error when deciding what value to place upon, and how to use or rely upon, the test result. Like other scientific tests, polygraph tests are intended to make probabilistic classifications of deception and truth-telling in the absence of an ability to directly observe or physically measure the issue of concern. If physical phenomena were available for observation or measurement, then a scientific test would not be needed.
Because deception and truth-telling are amorphous constructs, scientific lie detection and credibility assessment are, ultimately, epistemological concerns that are sometimes the subject of complex and important philosophical questions, such as: what does it mean to say that something is true, and what kinds of things can be said to be true? Although deeply interesting, these must be the subject of another publication.
Lie Detector Test in Surrey
Yesterday we had a very common case for a couple from Surrey who visited our Kent offices. Throughout the relationship, arguments had been started and the usual accusations of sleeping with someone else were directed at the person we were due to test. Unfortunately, in this case the person we tested broke down mid-argument and admitted to something they didn’t do; once you say something like this, it’s very hard to then take it back and ultimately prove you didn’t cheat on your partner. Fortunately, this lady turned to Lie Detectors UK to take a lie detector test to prove to her partner that she had been faithful to him. We call these tests ‘Proof of Innocence Tests’, and they are popular.
On arrival at our head office we discuss the case with both parties in full and work on three suitable questions based on fact. In the pre-test we discuss how the polygraph works, including a full walk-through of the sensors we use, as well as a lot more. The testing stage comprises four tests; we repeat questions multiple times and take an average for the results, which are given on the day both verbally and in report format. An average lie detector test will last two hours. An example of a popular question we use is ‘Since being in a relationship, besides your partner, have you engaged in sexual intercourse with anyone else?’
In this particular lie detector test with the couple from Surrey, the client we tested was telling the truth, and we were pleased to be able to help her prove this to her partner and get their relationship back on track. It’s very easy in an argument to say things we do not mean, purely to hurt someone or to gain a reaction; in this case, what was said in anger caused a huge rift, with no other way to prove it wasn’t true but to take a lie detector test.
Lie Detectors UK have our own offices and offer fixed-price tests at £399. We are members of the American and UK Polygraph Associations and are fully qualified and experienced to help our clients and deliver an accurate result.
ITV has released its duty of care guidelines for those participating in TV shows; more below. It is a shame that a series of unfortunate incidents had to occur to make this happen.
Following media, public and parliamentary scrutiny of its practices in the wake of the suicide of a participant in British tabloid talk show The Jeremy Kyle Show, UK commercial PSB ITV has outlined its guidelines on protecting participants in programmes made for broadcast on ITV, whether made by its own ITV Studios production house or third party suppliers.
Earlier in 2019, ITV Studios introduced, throughout its content making business, refreshed processes and guidance to manage and support the mental health and well-being of programme participants before, during and after production. The processes and guidance rolled out in ITV Studios were developed with the assistance of Dr Paul Litchfield CBE.
This guidance includes reference to the proposed new Ofcom rules in relation to protecting adult participants.
“The health and safety of everyone who takes part in our programmes is our highest priority, which is why we are sharing our best practice guidelines with producers,” stated Kevin Lygo, ITV’s Director of Television.
“This is not intended to be prescriptive but is draft guidance we are rolling out to all producers working with ITV, so we have a framework for the discussion around what the levels of risk might be and what proportionate processes producers therefore may need to have in place.”
“We and our producers already have comprehensive duty of care processes in place which reflect our knowledge and experience of making shows featuring members of the public. As these programmes have evolved, so have the pressures on those entering the public eye through appearing in our shows, from media and social media interest. To continue to make television that reflects and represents a wide and diverse range of people who want to take part, we need to ensure those people are aware of the implications – both positive and negative – that appearing on TV can lead to, so they can make an informed decision on their participation.
“We believe that television is all the better for the energy, talent and diversity of the people who share their experiences, lives and stories with the nation, and this guidance offers a framework for discussion with producers of how best practice can be achieved in making shows for our network, for the benefit of all.”
“Pact and its members take the welfare and protection of programme participants very seriously, and we welcome ITV sharing its best practice guidelines with producers,” added John McVay, Chief Executive of independent producers trade body PACT.
The updated guidelines set out a framework for assessing welfare and mental health risks for participants, identifying six general factors to be considered.
The guidelines also include information on measures that ITV suggests producers should consider putting in place to address welfare and mental health risks. Any measures or processes should be proportionate to the likely risks, given factors such as the programme format and the individuals concerned, and this continues to be taken into account when discussing programme commissions and considering the method and cost of production.
Where productions have medium or higher risk elements, producers should discuss their participant protection processes with their ITV commissioner and the ITV compliance lawyer or advisor allocated to their programme. This should cover periods of pre-production and casting, the period of production and broadcast of the programme and, where appropriate and proportionate, post production and broadcast. It may also involve the need to engage expert psychological advice and support.
Medium or higher risk productions should have a written risk management plan and processes/protocols in place for protecting the welfare of programme participants, which should be shared with ITV. Regular reporting of risk in programmes and the control measures introduced is a key element of risk reporting within ITV.
- Donald Trump has weighed making his staff take lie detector tests
- President is concerned about leaks coming from his administration
- Most leaks involve news stories he does not view as favorable to him
- ‘He wanted to polygraph every employee in the building to unearth who it was who spoke to the press,’ an official said
- Trump has long been paranoid about leaks
- He has complained of ‘snakes’ in the White House
- He has brought in people to try to weed out the leakers
Donald Trump has weighed making his staff take lie detector tests as he has become obsessed with leaks coming out of his White House, it was revealed on Tuesday.
The president and his team have complained about leaks before, particularly when it paints his administration in an unflattering light – such as details of Trump’s calls with foreign leaders, his desire to buy Greenland, and his wish to build a moat for his border wall containing alligators and snakes.
But Trump has gotten so obsessed with the leaks that he has frequently discussed ordering polygraphs of White House staffers after major revelations, four former White House officials told Politico.
‘He wanted to polygraph every employee in the building to unearth who it was who spoke to the press,’ one of the officials said.
There have been previous indications the White House is concerned about leaks, going back to when Trump first entered the Oval Office in 2017.
And Trump’s frustration with the leakers is often tied to press coverage he considers unfavorable – or fake news.
He's brought in people before to try to weed out the leakers – although the leaks keep coming.
When Anthony Scaramucci was brought in for his short-lived tenure as White House communications director in July 2017, he vowed to get rid of all the leakers.
‘What I want to do is I want to f***ing kill all the leakers and I want to get the President’s agenda on track so we can succeed for the American people,’ he told The New Yorker in an infamous interview that got him fired after seven days on the job.
In September 2018, after an anonymous White House staffer published the ‘resistance’ memo in The New York Times, Trump was reported to have railed against ‘snakes’ in his administration and to carry a list of staffers he suspected of leaking information.
‘When he was super frustrated about the leaks, he would rail about the ‘snakes’ in the White House,’ a source told Axios at the time.
Trump has demanded to know the whistle-blower’s identity.
Past presidents have also had to deal with leakers.
President Barack Obama and his administration pursued leakers – but those investigations typically had to do with national security matters.
In 2012, Jim Clapper, Obama's director of national intelligence, requested that intelligence employees be asked if they had shared classified information with the media after several secrets leaked.
The Obama administration also obtained phone records from Associated Press and Fox News reporters as part of their probe into how classified material leaked.
Polygraph use on staff is common in the intelligence arena, where most of the information in an agency is classified and officials can be ordered to take the tests at a moment's notice as a condition of their employment.
It’s less common for political staff and appointees.
Lie Detectors UK originally reported on this two years ago, but it's back in the press this week. Contact us today if you need help plugging your own leaks!
Spotting A Liar
Children learn to lie between the ages of 2 and 5, and by adulthood most are prolific liars. One recent study goes as far as to suggest the average person hears up to 200 lies a day, which does sound a little on the high side; I guess it depends on interactions with others. Obviously the majority of lies are classed as 'white lies', the inconsequential niceties, but a psychologist from Hertfordshire University says most people tell at least one big lie a day. People lie to promote, to protect and to avoid hurt, either to themselves or someone else.
Most people are hopeless at spotting a liar. A review of 206 scientific studies shows people separate the truth from lies just 54% of the time, a shocking result. Yes, there can be tell-tale responses one might recognise in a partner, examples being going red, grinning or looking away, but there is no common factor we all exhibit when we are lying.
Fortunately, we have the polygraph; in the US an estimated 2.5 million exams are conducted every year. In the UK the Government has been testing sex offenders on probation since 2014, and the programme has been so successful that it has put an alarming number back behind bars. In January 2019 the UK Government announced plans to expand its polygraph programme to additionally test domestic abusers on parole.
Whilst there has always been a healthy debate on the use of the polygraph, the results speak for themselves. A single-issue test conducted by a qualified American Polygraph Examiner can achieve accuracy levels of 95%. There is currently nothing on the market in the field of lie detection that can beat this; the latest technology, Eye Detect, achieves 80% accuracy and has its flaws.
Lie Detectors UK provides a UK-wide lie detection service and is one of the few companies to have its own offices. All examiners are qualified and experienced and are members of the APA and the UK Polygraph Association. All prices are fixed and transparent; get in touch today and find out just why Lie Detectors UK were voted 'Best Polygraph Testing Company in 2019'.
Lie Detectors UK win another award…
Lie Detectors UK are extremely proud to have won another award this year: Corporate Livewire have announced that we have been successful in winning 'Lie Detector Service of the Year'. This is the second award we have won this year, which is a testament to our examiners and thanks to our clients. Due to demand for our services, we are in the process of opening another office this year in West London, headed up by our female examiner.
The Corporate Livewire Prestige Awards celebrate small and medium-sized enterprises, consisting of localised businesses and sole traders, that have thrived in their highly competitive communities and proven their success during the past 12 months.
Lie Detectors UK offer a UK-wide service with male and female examiners. Originally set up by Jason Hubble in 2014, they have quickly risen to become one of the UK's top lie detection companies and have been involved in many high-profile cases. All examiners are American Polygraph Association examiners and members of the UK Polygraph Association, the UK's only professional association that requires all its members to be fully up to date with their training. Tests are fixed price at £399 at one of their offices and £499 anywhere else in the country, including a home visit, providing the client has a suitable distraction-free environment to test in. Their head office is in Kent's county town of Maidstone, in a Grade II listed school house with car parking and a waiting room for clients. It is at this office, every two years, that the UKPA runs advanced training for all UK examiners, allowing them to stay current in the ever-changing field of lie detection. When a potential client calls Lie Detectors UK they will always speak to a qualified examiner, who will discuss their case and ensure a lie detector test is suitable.
UK Polygraph Association new appointment
Jason Hubble, the chief examiner of Lie Detectors UK, has been appointed to the position of Secretary of the UK Polygraph Association, recognising his hard work and dedication to the profession. The UK Polygraph Association is the original professional polygraph association in the UK and the only one that requires all its members to be up to date with their advanced training, a condition of membership of the American Polygraph Association, which has over 2,800 members worldwide and is widely recognised as the profession's governing body. The UKPA hosts an advanced training course at Lie Detectors UK's offices; the next course is scheduled for June 2020 and will be sold out again.
We always say to clients: if you don't use us, please make sure you use a qualified and experienced examiner, a list of whom can be found on the UKPA website here.
Lie Detectors UK examiners: always setting the standards in the polygraph industry. Call or email today and discuss your case with a fully qualified examiner, at fixed pricing.
Manchester Police use Lie Detector
Greater Manchester Police are tackling the growing number of sex offenders in our communities with the use of lie detector tests. There are around 3,500 registered sex offenders in the area, and officers say the number is growing by 10% per year.
The force told us their specialist officers are each responsible for keeping tabs on 65 sex offenders – well over the recommended national guideline of 50.
Greater Manchester Police were one of the first forces to introduce polygraphs to help keep tabs on offenders.
DCI Jude Holmes told Granada Reports it’s just one tool they use:
Sex offenders can be made to take polygraphs but the man here today has come in voluntarily. He now lives in the community after spending time in prison, and tells police he will not offend again, but it’s up to officers to determine the truth.
A specialist unit spends hours visiting registered sex offenders in the community, checking devices and monitoring activity. They hope the polygraph can be used to focus attention on those most at risk.
The offender with us when we visit the unit was jailed for sex crimes against children. He is interviewed before the polygraph takes place; this is a final opportunity to make any confessions and divulge information that could come up later.
He is then hooked up to various sensors and the test begins. Questions about sexual contact with children are asked. The offender responds 'no' to all of them. The questions are asked again to see how the responses match up. The test itself is over in a matter of minutes.
Assessment by the polygraph examiner reveals he’s telling the truth. This could mean a lowered risk level and in future less contact with police.
The ex-convict here today told Granada Reports he thought the tests were "necessary" to "prove you're not doing anything to harm anybody."
Whilst polygraphs cannot be used in court and aren't 100% accurate, GMP say 80% of the ones they've conducted so far have led to further intelligence and even arrests.
Polygraph Clears Defendant of Two Death by Auto Charges By Jerry A. Lewis New Jersey State Police (retired)
On October 30, 2010, four teens left a party in a car driven by 17-year-old (TM). That vehicle, a Chevrolet Cavalier owned by TM's mother, was later involved in a one-car accident. The black box revealed that it was travelling at over 90 MPH when it left the roadway and struck a tree. Two of the vehicle's occupants were killed. TM and the other survivor, (GB), fled the scene and were not located until the next morning. Immediately upon being found by police, TM admitted that he was driving the vehicle at the time of the accident. He later gave two additional statements (to an ambulance paramedic and subsequently to a medevac flight medic) admitting to being the driver of the vehicle at the time of the accident. Further, he was positively identified by an eyewitness as operating the motor vehicle when the four teens left the party. He was charged with two counts of Death by Auto and two counts of Leaving the Scene of an Accident Involving Death. Each charge carried a five- to ten-year state prison sentence for an adult.
Upon further deliberation and now with the benefit of the presence of his parents, TM attempted to recant his previous confessions when later interviewed by the police at the hospital. The defendant stated that he was in fact not driving the vehicle at the time of the accident. He stated that the surviving passenger GB intimidated him into letting him drive his mother’s car after they left the party. TM claimed that the four teens actually made a stop after leaving the party, at which time GB began operating the motor vehicle. He added that after the accident, while they were eluding capture, GB kept telling him (TM) to say that he was driving. He agreed to say that he was driving due to intimidation and because GB would loan him his jacket to wear as he was extremely cold. GB emphatically denied operating the motor vehicle at any time and none of the evidence supported TM’s claims.
Despite the overwhelming evidence against his client, TM's defence counsel requested a polygraph examination of his client to add credibility to his claims. The prosecution had a virtually airtight case against TM: it was TM's car, he was seen driving it leaving the party, and he gave three confessions admitting that he was driving at the time of the accident. On June 15, 2011, I tested the client using an event-specific, single-issue polygraph examination at Mr. Russo's office. I used a three-relevant-question Utah Comparison Question test format and a Stoelting Computerised Polygraph System.
The charts were rough. TM was shrugging his shoulders and was emotional as he answered the test questions. In an attempt to alleviate these issues, I conducted Silent Answer Tests. After three charts I had an Inconclusive result. As recommended in the Utah format for resolving Inconclusive results, I conducted two additional tests with the same questions. I evaluated the polygraph test data with the Empirical Scoring System, utilising the Kircher features. My score for the five charts was +5, which is statistically significant for a Truthful conclusion when the client (TM) answered the relevant questions.
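The decision step described above — summing the chart scores and comparing the grand total against statistical cutoffs — can be sketched as follows. This is an illustrative simplification only, not the examiner's actual software: the cutoff values shown here are hypothetical defaults, since real Empirical Scoring System cutoffs depend on the test format and the chosen error rates.

```python
# Illustrative sketch of a grand-total decision rule in the style of the
# Empirical Scoring System (ESS). Cutoffs are hypothetical examples; actual
# values depend on the exam format and the required statistical thresholds.

def ess_decision(grand_total: int,
                 truthful_cutoff: int = 2,
                 deceptive_cutoff: int = -4) -> str:
    """Classify a summed chart score against two example cutoffs."""
    if grand_total >= truthful_cutoff:
        return "Truthful"       # total meets or exceeds the truthful cutoff
    if grand_total <= deceptive_cutoff:
        return "Deceptive"      # total meets or falls below the deceptive cutoff
    return "Inconclusive"       # total falls between the two cutoffs

# The +5 grand total reported across the five charts would classify as
# Truthful under these example cutoffs.
print(ess_decision(5))
```

Under these assumed cutoffs, scores between the two thresholds remain Inconclusive — which is why the examiner ran the two additional charts allowed by the Utah format before reaching a decision.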
After receiving my report, the Prosecutor's Office sent previously untested blood evidence from the steering wheel of the vehicle to the lab for DNA analysis. The test results revealed that the blood on the steering wheel did not belong to the defendant but was a direct match for GB. The Prosecutor's Office subsequently re-interviewed GB, and he admitted that he was in fact driving the car at the time of the accident. He was arrested and later agreed to a plea deal to serve four years in New Jersey State Prison.
Credit for the successful resolution of this case goes to the defence counsel, who believed in his client and utilised the polygraph to assist in his defence. In addition, the Prosecutor's Office, even though it had a solid case, made decisions to ensure that it was prosecuting the person responsible. This was a tragic accident that cost two young people their lives, but the polygraph was essential in clearing an innocent defendant from almost certain jail time and seeing that justice was done.
Lie Detectors UK examiners are fully qualified APA examiners and can help with any legal case; always take advice from your solicitor in the first instance. Our prices are here.
Killer requests a Lie Detector Test
We have recently been contacted to provide a Polygraph Test for Paul Mosley, a convicted killer who is currently serving time behind bars.
Lie Detectors UK would like to publicly state we will not be offering our services to Mr Mosley.
Lie Detectors UK: tests from £399 at our own offices, or we can come to you. Get in touch today on 0207 859 4960 and discuss your case in confidence with a qualified APA examiner.