Bizarre Things That Never Made Sense About JFK's Death


It's been over fifty years since the assassination of President John F. Kennedy, and the facts still don't add up. Even though the shooting was caught on camera, many questions remain unanswered. When accused assassin Lee Harvey Oswald was shot and killed just two days after supposedly taking down JFK, the truth died with him, leaving everyone to speculate about what truly happened in the President's final moments.


The Secret Service Failed To Make A Move



Since the assassination of JFK was caught on several cameras, depicting the gruesome murder from multiple angles, the event has been reviewed and studied by countless people. The Secret Service is best known for its fast reflexes and intimidating demeanor, but during JFK's time in the White House, that reputation had slipped. According to Vanity Fair, the President's lax attitude had rubbed off on his staff and, with each passing year he was in office, the Secret Service grew lazier and lazier. Several former members of the Secret Service came forward about that day years later, claiming they were slow to respond to the shots fired because they'd been out partying the night before and were sleep-deprived, even hungover.

One particular agent, Abraham Bolden, described the details of that day in his book The Echo from Dealey Plaza. He specifically remembers that after JFK was shot, an agent shouted, "I knew it would happen. I told those playboys that someone was going to get the president killed if they kept acting like they did. Now it's happened." Bolden also discussed what it was like being one of the only African-Americans on the Secret Service team, and the racism he had to deal with. Decades before the book was released in 2008, Bolden had been accused and convicted of attempting to sell a secret government file for $50,000 to the defendant in another case. Bolden claimed that he'd been framed for going public about the partying ways of the Secret Service, which he feels led to the death of JFK. Framed or not, it's clear in the videos that everyone nearby, aside from the First Lady, failed to react until the third and final gunshot was fired.


The Secret Service Stole Kennedy's Body



After JFK was shot, he was rushed to Parkland Memorial Hospital in Dallas, where he was pronounced dead. Texas state law required that the President's body remain at the hospital until an autopsy had been performed, but the Secret Service felt differently. According to Jacob G. Hornberger, president and founder of the Future of Freedom Foundation, a dispute broke out in the hospital when Dallas medical examiner Dr. Earl Rose attempted to prevent the Secret Service from taking JFK's body. According to Hornberger's article "The First Step In The JFK Cover-Up," there was a lot of shouting and cursing before agents drew their guns and pushed their way out of the hospital with JFK's body.

The casket holding JFK's body was then flown on Air Force One to Andrews Air Force Base, after which the military supposedly conducted the autopsy. Many have speculated that this move was the first of many performed by the government to cover up the assassination of the president. When reports surfaced about exactly what happened at the hospital, the public was led to believe that this was all normal, and that breaking Texas law by practically stealing the president's body was totally consistent with Secret Service training. However, many see it as a precaution taken to ensure that the details of JFK's autopsy weren't leaked to the public, especially if those details proved his assassination was a conspiracy involving our own government.


Oswald Might Not Have Acted Alone



The biggest conspiracy theory surrounding JFK's assassination holds that the evidence points to more than one shooter. Proponents speculate that the shot that hit JFK in the neck came from a different gun than the bullet that struck then-Texas Governor John Connally immediately after. Rumors claimed there was a second gunman in the crowd or in another building, though examination of video evidence seems to prove there was not.

Dale Myers, a computer animator who studied the assassination for over 25 years, created a simulation of the event based on video footage from that day. The simulation allowed him to examine the viewpoint of anyone who witnessed the murder, as well as track the trajectory of each bullet. Myers told ABC News, "…the accuracy of the computer model would be such that you could then plot trajectories, you could take the wounds, the positions of the figures, you could see where the firing sources were from, or not from."

This technology demonstrated that the bullet that first struck JFK in the neck was the same bullet that then hit Connally under his armpit. Myers points out that the trajectory through Connally's wound, from his seat directly in front of JFK, lines up with JFK's, and that the two reacted to their injuries simultaneously. Additionally, each bullet's trajectory leads back to the sixth floor of the Texas School Book Depository, where Oswald worked. Regardless, many still believe that one of the many conspiracy theories is closer to the truth.
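
For the curious, the core geometric idea behind this kind of reconstruction is simple enough to sketch in a few lines of Python: a bullet travelling in a straight line passes through both wounds, so extending the line through the two wound positions backwards points toward the firing source. The coordinates below are hypothetical placeholders chosen purely for illustration; they are not Myers' survey data.

    import numpy as np

    # Hypothetical wound positions in metres, in an arbitrary plaza-fixed
    # frame (x east, y north, z up). NOT Myers' actual measurements.
    jfk_neck_wound = np.array([0.0, 0.0, 1.4])
    connally_wound = np.array([-0.5, -0.7, 1.1])  # seated in front of and below JFK

    # A single straight-line trajectory passes through both wounds; the
    # vector from Connally's wound back through JFK's points at the shooter.
    direction = jfk_neck_wound - connally_wound
    direction = direction / np.linalg.norm(direction)

    # Extend the line, say, 80 m back and compare the end point against
    # candidate firing positions (e.g. a sixth-floor window).
    source_estimate = jfk_neck_wound + direction * 80.0
    print(source_estimate)

A full reconstruction obviously adds frame-by-frame body positions and uncertainty estimates, but the backwards line-extension step is the basic trick.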


Oswald Never Confessed



Police arrested Oswald as a suspect because he was the only employee not present at the Texas School Book Depository after the shooting. They found him in a movie theatre just two hours later, acting as if nothing had happened. Police questioned the 24-year-old all weekend but weren't able to get a confession out of him before he was shot and killed by local nightclub owner Jack Ruby. Even though Ruby insisted he wasn't involved in any plot to assassinate JFK, some believe he was sent to kill Oswald to ensure he kept his mouth shut.

As a well-known loner, Oswald was the ideal suspect for this crime, and his connection to communism seemed to provide a motive for his actions. A former U.S. Marine, Oswald defected to the Soviet Union and lived there for nearly three years, and it's thought that during this time he became a committed Communist, even marrying a Russian woman and bringing her back to America. During this period in history, Fidel Castro established a Communist government in Cuba and became allied with the Soviet Union. Rumors had been flying about the American government's intent to get involved and kill Castro, creating a possible motive for Oswald's actions. Of course, we will never know for sure what was truly going through his mind that day.


The CIA Was Probably Involved Somehow



Dave Perry is known for his dedication to debunking conspiracy theories surrounding the JFK assassination, having studied records of the event since 1976. The theory that interests him most is the one claiming the CIA ordered the assassination, as it's the only theory he has thus far failed to debunk. According to Perry, JFK was fed up with how the CIA was running things, saying "He found out the CIA was trying to kill Castro, which is a fact. So the argument is that the CIA felt Kennedy was going to disband them. And as a result of that, they were the ones that ordered the killing of Kennedy."

According to this theory, Oswald was an agent acting on orders from the CIA. Just weeks before the assassination, Oswald visited the Soviet embassy in Mexico City, possibly acting as a double agent working for both sides to get rid of JFK. While there's no proof that he was on anyone's payroll, Perry notes that the former head of the CIA, Allen Dulles, was a member of the Warren Commission, the body responsible for the official investigation into JFK's death, which would be convenient if the CIA actually was involved. The commission, of course, came to the conclusion that Oswald acted alone, which fits neatly into this theory.


Johnson's Mistress Spilled The Beans



Another theory holds that JFK's vice president, Lyndon B. Johnson, was the one who ordered the assassination, as he had a lot to gain from Kennedy's death. The theory is laid out in Roger Stone's book The Man Who Killed Kennedy, which builds on the claim that JFK had told his secretary that Johnson would be dropped from the 1964 ticket due to several scandals he was involved in. Additionally, Stone's book says Johnson was the one who convinced JFK to visit Dallas, and also the one who suggested he drive through the plaza in a convertible.

It could all be coincidence, however. Stone claims that fingerprints found on the sixth floor of the Texas School Book Depository are consistent with his theory: his book attributes the assassination to notorious hitman Malcolm "Mac" Wallace, whom Stone believes was hired by Johnson himself. Wallace's fingerprints, he says, were found in the area the gunshots are believed to have come from, which is odd enough on its own.

What truly stoked suspicion about Johnson's involvement was a memoir written by his mistress, Madeleine Duncan Brown. Involved with Johnson for twenty years, she claimed that the night before the assassination, the then-Vice President made a comment alluding to his knowledge of what was planned for the following day. The New York Post cited Brown's now out-of-print memoir, which quotes Johnson as saying, "After tomorrow, those Kennedy S.O.B.'s will never embarrass me again." It certainly sounds as though Johnson was involved in some way or another.


Mary Moorman Captured The Event On Camera



One spectator, Mary Moorman, captured the exact moment the first shot hit JFK, resulting in one of the most iconic photos in history. In an interview with USA Today, Moorman described the scene vividly: she was standing only ten or twelve feet from the President, close enough to clearly hear the First Lady shouting that her husband had been shot. Her son, who had school that day, made her promise to take photos for him, a small request that she could never have expected would lead to her fame.

Over time, Moorman's photos became the subject of two controversies. The first claims that, of the five photos Moorman took that day, one has since gone missing. She turned over all of her photos to the Secret Service, but allegedly didn't get them all back. Of course, the one rumored to be missing is the one with that infamous sixth-floor window in the background, a photo that might have provided a clue as to who actually shot the president.


The Badge Man



When Moorman's other photos were published, they became a huge piece of evidence for those obsessed with the JFK mystery. In 1982, a researcher named Gary Mack noticed a figure in the background of one of Moorman's photos, a figure now dubbed the Badge Man. With the help of photo technician Jack White, the photo was enhanced and seems to reveal a man in a uniform similar to those worn by Dallas police officers at the time. Standing on the grassy knoll, the direction many suspect the fatal shot came from, the figure appears to be in a "firing stance," according to these experts.

Labeled "Badge Man" because his badge and uniform can be made out in the photo, the figure's face appears obscured by a cloud of smoke that believers speculate could have been produced by a firearm. While other researchers have also enhanced the photo and come up with the same results, some eyewitnesses claim there was no one standing there that day. Skeptics argue the figure is actually an illusion made up of the building's corner and other elements in the area.


How To Speed Up Your Internet Connection

1. Open the Start menu.

2. In the "Search programs and files" box, type gpedit.msc and press Enter (or click Run, enter gpedit.msc, and click OK).

3. Double-click Administrative Templates, then double-click Network.

4. Open QoS Packet Scheduler and go to "Limit reservable bandwidth".

5. Select the Enabled option and set the bandwidth limit to 0.

6. Click Apply, then OK.
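
If you'd rather script this change than click through the Group Policy editor, the "Limit reservable bandwidth" policy is commonly reported to map to the NonBestEffortLimit registry value under HKLM\SOFTWARE\Policies\Microsoft\Windows\Psched. Here's a minimal Python sketch using the standard winreg module; treat the key path and value name as assumptions for your particular Windows version, and run it from an elevated (administrator) prompt.

    import winreg

    # "Limit reservable bandwidth" is commonly reported to correspond to
    # this policy key; treat the path and value name as assumptions.
    KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\Psched"

    # Writing under HKLM requires administrator rights.
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        # 0 means "reserve 0% of bandwidth", i.e. Enabled with the limit at 0.
        winreg.SetValueEx(key, "NonBestEffortLimit", 0, winreg.REG_DWORD, 0)

    print("Reservable bandwidth limit set to 0%")

Setting the value to 0 mirrors steps 5 and 6 above; deleting the value (or the whole policy key) reverts to the default behaviour.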

Y2020 Computer Bug/Crash: Massive Worldwide Disaster

They Got The Date Wrong: It's 2020

The projected date has been revised to 2020.

Leading computer experts Steve Combloskey and Jerry Jamely Jr., having studied this for years, have come up with a new date: January 2020. Their conclusion was based on a new set of algorithms involving eight tech sectors set for explosive growth in the next few years: wireless innovation, the internet of things, smarter cities, smarter cars, data centers, security, virtual and augmented reality, and healthcare. They're calling it the Y2020 Computer Bug/Crash.

Based on their studies, they've published a short paper as well as a book, "Algorithms And Formulas Predicting A 2020 Computer Crash".

The original Y2K bug, also called the Year 2000 bug or Millennium Bug, was a problem anticipated in the coding of computerized systems, projected to create havoc in computers and computer networks around the world at the beginning of the year 2000 (the "K" stands for thousand, as in the metric prefix kilo-). After more than a year of international alarm, feverish preparations, and programming corrections, few major failures occurred in the transition from December 31, 1999, to January 1, 2000.

Until the 1990s, many computer programs (especially those written in the early days of computers) were designed to abbreviate four-digit years as two digits in order to save memory space. These computers could recognize “98” as “1998” but would be unable to recognize “00” as “2000,” perhaps interpreting it to mean 1900. Many feared that when the clocks struck midnight on January 1, 2000, many affected computers would be using an incorrect date and thus fail to operate properly unless the computers’ software was repaired or replaced before that date. Other computer programs that projected budgets or debts into the future could begin malfunctioning in 1999 when they made projections into 2000. In addition, some computer software did not take into account that the year 2000 was a leap year. And even before the dawn of 2000, it was feared that some computers might fail on September 9, 1999 (9/9/99), because early programmers often used a series of 9s to indicate the end of a program.
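
The mechanics of the bug are easy to demonstrate. The Python sketch below is illustrative only (no real legacy system ran this exact code): it shows how a two-digit year silently turns 2000 into 1900, the "windowing" workaround many remediation teams actually applied, and the leap-year wrinkle mentioned above.

    import calendar

    def legacy_parse_year(yy: int) -> int:
        # The pre-Y2K assumption: every two-digit year lives in the 1900s.
        return 1900 + yy

    print(legacy_parse_year(98))  # 1998 -- works as intended
    print(legacy_parse_year(0))   # 1900 -- the bug: "00" was meant to be 2000

    def windowed_year(yy: int, pivot: int = 30) -> int:
        # A common fix ("windowing"): two-digit years below the pivot are
        # read as 20xx, the rest as 19xx.
        return 2000 + yy if yy < pivot else 1900 + yy

    print(windowed_year(98))  # 1998
    print(windowed_year(0))   # 2000

    # The leap-year wrinkle: 2000 IS a leap year (divisible by 400), which
    # software implementing only the "century years aren't leap years"
    # exception got wrong.
    print(calendar.isleap(2000))  # True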

It was feared that such a misreading would lead to software and hardware failures in computers used in such important areas as banking, utilities systems, government records, and so on, with the potential for widespread chaos on and following January 1, 2000. Mainframe computers, including those typically used to run insurance companies and banks, were thought to be subject to the most serious Y2K problems, but even newer systems that used networks of desktop computers were considered vulnerable.

The Y2K problem was not limited to computers running conventional software, however. Many devices containing computer chips, ranging from elevators to temperature-control systems in commercial buildings to medical equipment, were believed to be at risk, which necessitated the checking of these “embedded systems” for sensitivity to calendar dates.

In the United States, business and government technology teams worked feverishly with a goal of checking systems and fixing software before the end of December 1999. Although some industries were well on the way to solving the Y2K problem, most experts feared that the federal government and state and local governments were lagging behind. A Y2K preparedness survey commissioned in late 1998 by Cap Gemini America, a New York computer industry consulting firm, showed that among 13 economic sectors studied in the United States, government was the least ready for Y2K. (Rated highest for preparedness was the software industry.)

In an effort to encourage companies to share critical information about Y2K, U.S. Pres. Bill Clinton in October 1998 signed the Year 2000 Information and Readiness Disclosure Act. The law was designed to encourage American companies to share Y2K data by offering them limited liability protection for sharing information about Y2K products, methods, and best practices.

In western Europe the European Commission issued a report warning that efforts to solve Y2K in many European Union member countries were insufficient, particularly in terms of the cross-border cooperation needed to be ready by 2000. The British government announced that its armed forces would be prepared in time and would provide assistance to local police if utilities, transportation systems, or emergency services failed.

Many other countries, notably Asian countries suffering at that time from an ongoing economic crisis as well as small or geographically isolated countries, were thought to be less well prepared. It was uncertain how this would affect the tightly integrated world economy and physical infrastructure. In mid-December 1998 the UN convened its first international conference on Y2K in an attempt to share information and crisis-management efforts and established the International Y2K Cooperation Center, based in Washington, D.C.

An estimated $300 billion was spent (almost half in the United States) to upgrade computers and application programs to be Y2K-compliant. As the first day of January 2000 dawned and it became apparent that computerized systems were intact, reports of relief filled the news media. These were followed by accusations that the likely incidence of failure had been greatly exaggerated from the beginning. Those who had worked in Y2K-compliance efforts insisted that the threat had been real. They maintained that the continued viability of computerized systems was proof that the collective effort had succeeded. In following years, some analysts pointed out that programming upgrades that had been part of the Y2K-compliance campaign had improved computer systems and that the benefits of these improvements would continue to be seen for some time to come.

In regard to the newly revised Y2020 date, other leading scientists, Tim Mcgrafy and Marty Stevenson, have concluded that they not only back the new date but have also submitted articles to be included in the last chapters of "Algorithms And Formulas Predicting A 2020 Computer Crash".

The satire contained in this article and the fictional nature of its content – even if based on real people and however similar to real events – are solely for entertainment.

The Difference Between Virtual And Augmented Reality

Virtual and augmented reality seem to be on everybody’s lips nowadays, both promising to revamp the tech scene and change the way consumers interact in the digital space. Despite the hype and media attention, the two often get confused as some people use the terms interchangeably. While there are many similarities between virtual reality (VR) and augmented reality (AR), the two are definitely distinguishable. 


What’s Virtual Reality?



Virtual reality is a computer-simulated reality in which a user can interact with replicated real or imaginary environments. The experience is totally immersive by means of visual, auditory and haptic (touch) stimulation, so the constructed reality is almost indistinguishable from the real deal. You're completely inside it.


After some clunky beginnings, the idea of an alternate simulated reality took off in the late '80s and early '90s, a time when personal computer development exploded and a lot of people became excited about what technology had to offer. Early attempts, like the disastrous Nintendo Virtual Boy, which was discontinued after only a year, were marked by failure after failure, and everyone seemed to lose faith in VR.


Then came Palmer Luckey, who is undoubtedly the father of contemporary VR thanks to his Oculus Rift. Luckey built his first prototype in 2011, when he was barely 18, and quickly raised $2 million on Kickstarter. In 2014, Facebook bought Oculus for $2 billion. Other popular VR headsets include the Samsung Gear VR and Google Cardboard.



What’s Augmented Reality?



While VR completely immerses the user in a simulated reality, AR blends the virtual and real. Like VR, an AR experience typically involves some sort of goggles through which you can view a physical reality whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data. In augmented reality, the real and the non-real or virtual can be easily told apart.


Wearing Google Glass — the biggest effort a company ever made to bring AR to mass consumers — you can walk through a conference hall and see things ‘pop to life’ around the booths, such as animated 3D graphics of an architecture model if the technology is supported. The goggles aren’t even necessary since you can do this via mobile apps which use a smartphone’s or tablet’s camera to scan the environment while augmented elements will show on the display. There are other creative means, as well.


Unfortunately, Google Glass didn’t take off and the company discontinued the product in 2015. Instead, AR apps on smartphones are much more popular, possibly because they’re less creepy than a pair of glasses with cameras.



Perhaps the most revealing example of AR is Pokemon Go, a viral phenomenon which amassed more than 100 million downloads in a few weeks. In Pokemon Go, you use your smartphone to find Pokemon lurking in your vicinity with the help of a map that's built from your real-life GPS position.


To catch a Pokemon, you have to throw a Pokeball at it by swiping on your mobile's screen, and when you toggle AR on, you can see the Pokemon with the real world in the background.

Despite the hype, Pokemon GO has a minimal and basic AR interface. Some more revealing examples include:


Sky Map — a mobile app that lets you point your phone towards the sky and ‘see’ all the constellations you’re facing in relation to your position.


Word Lens — a Google app that lets you point your phone at a sign and have it translated into your target language, instantly.


Project Tango – another Google project which aims to create a sensor-laden smartphone that can map the real world and project an accurate 3D picture of it.


“I’m excited about Augmented Reality because unlike Virtual Reality which closes the world out, AR allows individuals to be present in the world but hopefully allows an improvement on what’s happening presently… That has resonance.”

Tim Cook, CEO, Apple


Virtual Reality VS Augmented Reality



Both technologies enrich the user's experience by offering deeper layers of interaction, and both have the potential to transform how people engage with technology. Entertainment, engineering and medicine are just a few of the sectors where the two technologies might have a lasting impact.


However, the two stand apart because:


Virtual reality creates a completely new environment that is entirely computer generated.

Augmented reality, on the other hand, enhances experiences through digital means that offer a new layer of interaction with reality, but does not seek to replace it.


AR offers a limited field of view, while VR is totally immersive.


Another way to look at it is once you strap those VR goggles, you’re essentially disconnected from the outside world. Unlike VR, an AR user is constantly aware of the physical surroundings while actively engaged with simulated ones.

Virtual reality typically requires a head-mounted display such as the Oculus Rift goggles, while augmented reality is far less demanding — you just need a smartphone or tablet.

What's certain is that we're barely scratching the surface of what AR and VR can do. BCC Research estimated the global market for virtual and augmented reality will reach more than $105 billion by 2020, up from a mere $8 billion last year.


For a better explanation, you can always use a cinematic analogy: the world of The Matrix corresponds to virtual reality, while augmented reality is more akin to The Terminator. Another way to look at it is scuba diving versus going to the aquarium. In virtual reality, you can swim with sharks; with augmented reality, you can have a shark pop out of your business card through the lens of a smartphone. Each has its own pros and cons, so you be the judge of which is better.



What’s Mixed Reality?



A third distinct medium has surfaced: mixed reality (MR).


What MR does is mix the best of augmented and virtual reality to create a … hybrid reality. Mixed reality overlays synthetic content on the real world. If that sounds familiar, it's because MR is very similar to AR. The key difference is that in MR the virtual content and the real-world content can react to one another in real time. The interaction is facilitated by tools you'd normally see in VR, like special goggles and motion sensors.

Building Hoover Dam

Officials boldly ride in one of the penstock pipes of the soon-to-be-completed Hoover Dam (1935)

When it was finally finished in 1936, the 60-story Hoover Dam was the highest dam in the world; that distinction now belongs to the Jinping-I Dam in China. Eighty years later, however, Hoover Dam is not only still operational, generating 3.6 TWh annually, and a tourist attraction that millions flock to every year, it's also a remarkable engineering effort that serves as an inspiration for great infrastructure works.

This huge dam was built in only five years, from 1931 to 1936, with 5,251 people employed at peak construction. Hoover Dam's story began much earlier, though. Arthur Powell Davis, a famous engineer of the time from the Bureau of Reclamation, first outlined the vision for a high dam in Boulder Canyon on the Colorado River back in 1902. His indications and initial engineering report were put to good use when detailed planning for Hoover Dam began in 1921.

An inspection party near the proposed site of the dam in the Black Canyon on the Colorado River (1928)

Herbert Hoover, the 31st president of the United States and the man the dam was named after, played a crucial role in turning Davis' vision into reality. In 1921, while secretary of commerce, Hoover became convinced that a dam in Boulder Canyon was of the utmost importance. Such infrastructure would provide much-needed flood control in the area, protecting downstream farming communities that got battered each year when snow from the Rocky Mountains melted and spilled into the Colorado River.

A surveyor signals to colleagues during the construction of the dam (1932)

The dam would also provide enough water to irrigate farming in the desert and supply southern Californian communities like Los Angeles with potable water. That's, of course, in addition to the electricity it would generate. In 2015, Hoover Dam, which has about 2,000 megawatts of capacity, served the annual electrical needs of nearly 8 million people in Arizona, southern California, and southern Nevada.

Dynamite is detonated in the canyon to make room for the new dam (1933)

Once Hoover became president in 1929, the Boulder Canyon dam became a national priority. That same year, the Colorado River Compact, also known as the 'Law of the River', went into effect. It defined the relationship between the upper basin states, where most of the river's water supply originates, and the lower basin states, where most of the water demands were developing. Hoover would later call it "the most extensive action ever taken by a group of states under the provisions of the Constitution permitting compacts between states".

To make sure the canyon walls were solid enough to support the arch design, so-called 'high scalers' were employed to hammer away anything loose. Falling rocks were a serious hazard, so the workers dipped their hats in tar and left them out to dry, creating makeshift hard hats.

Building Hoover Dam was a gargantuan task. Before construction of the dam itself could begin, the Colorado River had to be diverted. Four diversion tunnels were carved through the canyon walls to route the river's flow around the dam site. Then the riverbed had to be dredged of deep silt and sediments to expose the bedrock.

This bucket holds 18 tons of concrete (1934)

To stabilize Hoover Dam, its base required 230 gigantic blocks of concrete. Columns were then linked together like a giant Lego set with alternating vertical and horizontal placements. By the time concrete pouring ceased on May 29, 1935, some 2,480,000 cubic meters of concrete had been used, not counting the 850,000 cubic meters that went into the power plant and other works. Overall, enough concrete was poured to pave a two-lane highway between San Francisco and New York!



Construction works carried on day and night (1935)

All of that concrete would have taken about 100 years to cool and cure properly were it not for the intervention of the Hoover Dam engineers. Some 528 miles' worth of one-inch steel pipe was embedded through the interconnecting concrete blocks, and ice-cold water was circulated through it. The water was supplied by the construction site's own ammonia refrigeration plant, which at peak capacity could produce the equivalent of a giant 1,000-pound ice block every day.

Water exerts as much as 45,000 pounds per square foot of pressure at the base of Hoover Dam, but the dam's arch-gravity design transfers this immense crushing force into the canyon walls, spreading it equally on the Arizona and Nevada sides.
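
That 45,000-pound figure is consistent with basic hydrostatics: water pressure grows linearly with depth, so taking water's unit weight of about 62.4 lb per cubic foot and a reservoir depth of roughly 720 feet (an assumption close to the dam's height) gives

$$p = \gamma h \approx 62.4~\mathrm{lb/ft^3} \times 720~\mathrm{ft} \approx 45{,}000~\mathrm{lb/ft^2}.$$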

The architect of Hoover Dam was Gordon B. Kaufmann, known for his design of the Los Angeles Times Building. Kaufmann not only took structural design into consideration but also applied an elegant Art Deco style to the entire project.

Engineering students pose for a picture atop one of the 2 million-pound hydroelectric generators for the dam at the General Electric factory in Schenectady, New York (1935)

A widely circulated urban myth says many dead bodies were buried inside the dam's concrete. That's certainly not true, although by today's standards far too many people died building Hoover Dam. Officially, 112 deaths are associated with its construction, including three workers who committed suicide on site and a visitor who died after accidentally falling off the massively high structure.

President Franklin D. Roosevelt tours the dam (1935)

The final block of concrete was poured and topped off at 726 feet above the canyon floor in 1935. On September 30, a crowd of 20,000 people watched President Franklin Roosevelt commemorate the magnificent structure's completion. The dam was designated a National Historic Landmark in 1985 and one of America's Seven Modern Civil Engineering Wonders in 1994. It receives some 7 million visitors annually, while Lake Mead, the largest reservoir in the United States, hosts another 10 million as a popular recreation area.

Hoover Dam after years of operation (1940)

Invisibility: another sci-fi dream come true?


Recently, attempts to make "cloaking technology" possible have come a long way, with major breakthroughs. Among the notable achievements are the work of Oleg Gadomsky, a Russian professor who managed to redirect light around objects, and that of researchers at the University of Maryland, who reported the successful cloaking of small 2D objects from all light waves.

Now, scientists have created two new types of materials that can bend light the wrong way, a first step toward an invisibility cloaking device. The people behind this achievement are the researchers at the Nanoscale Science and Engineering Center at the University of California, Berkeley, the first to manage the feat with 3D materials.

One approach uses a fishnet of metal layers to reverse the direction of light, while the other uses tiny silver wires, both at the nanoscale. Both are so-called metamaterials — artificially engineered structures with properties not seen in nature, such as a negative refractive index.

The materials were developed by two separate teams, both under the leadership of Xiang Zhang of the Nanoscale Science and Engineering Center at the University of California, Berkeley with U.S. government funding. One team reported its findings in the journal Science and the other in the journal Nature.

Don't treat the issue too seriously, though. We're a long way from invisible people walking the street or the cloaking of whole buildings. Far from it. Here's what Jason Valentine, one of the project members, had to say.

"We are not actually cloaking anything," Valentine said in a telephone interview. "I don't think we have to worry about invisible people walking around any time soon. To be honest, we are just at the beginning of doing anything like that."

Valentine's team made a material that affects light near the visible spectrum, in a region used in fiber optics.

“In naturally occurring material, the index of refraction, a measure of how light bends in a medium, is positive,” he said.

“When you see a fish in the water, the fish will appear to be in front of the position it really is. Or if you put a stick in the water, the stick seems to bend away from you.”

On the left is the conceptual rendered “fishnet” design for the second cloaking material. The actual produced material is seen on the right in an electron microscope picture. It is capable of bending light backwards.

What’s a negative index of refraction, you ask?

“Instead of the fish appearing to be slightly ahead of where it is in the water, it would actually appear to be above the water’s surface,” Valentine said. “It’s kind of weird.”
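
In textbook terms this is Snell's law, n1 sin θ1 = n2 sin θ2, with a sign flip. Using illustrative numbers (light entering from air, n1 = 1, at 30 degrees, into a hypothetical medium with n2 = −1.33; these values are assumptions for the example, not the teams' measurements):

$$\sin\theta_2 = \frac{n_1 \sin\theta_1}{n_2} = \frac{1.0 \times \sin 30^\circ}{-1.33} \approx -0.376 \quad\Rightarrow\quad \theta_2 \approx -22^\circ.$$

The negative angle means the refracted ray emerges on the same side of the surface normal as the incoming ray, which is exactly why the fish would appear above the water's surface instead of merely ahead of its true position.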

For a metamaterial to produce negative refraction, it must have a structural array smaller than the wavelength of the electromagnetic radiation being used. Some groups had managed it with very thin layers, virtually only one atom thick, but those materials were not practical to work with and absorbed a great deal of the light directed at them.

“What we have done is taken that material and made it much thicker,” Valentine said.

His team, whose work is reported in Nature, used alternating silver and dielectric layers stacked on top of each other and then punched through with holes. "We call it a fishnet," Valentine said.

Immediate applications might be superior optical devices, Valentine said — perhaps a microscope that could see a living virus.

“However, cloaking may be something that this material could be used for in the future,” he said. “You’d have to wrap whatever you wanted to cloak in the material. It would just send light around. By sending light around the object that is to be cloaked, you don’t see it.”

The Wright Brothers And Their Historic Flights

Orville and Wilbur Wright are credited as the first men to build an aircraft capable of manned, controlled flight. The first manned airplane flight (powered, controlled and heavier than air) occurred on December 17, 1903, when Orville flew 120 feet (37 m) over the ground in 12 seconds, at a speed of only 6.8 miles per hour (10.9 km/h).

The Wright brothers worked fundamentally differently from the other manned-flight pioneers of their time. While others concentrated on fitting stronger engines and making more test flights, Orville and Wilbur preferred to tackle aerodynamics instead. The brothers built their own wind tunnel and carried out extensive aerodynamic tests. This eventually led to their three-axis control system: wing-warping for roll (lateral motion), a forward elevator for pitch (up and down) and a rear rudder for yaw (side to side). This control was indispensable for the pilot, enabling both better flight performance and the avoidance of the accidents that were so frequent at the time.

Some scholars regard the 1902 glider as the most revolutionary aircraft ever created and the real embodiment of the genius of Orville and Wilbur Wright. Although the addition of a power plant to their 1903 Flyer resulted in their famous first flight, these scholars see that improvement as merely a noteworthy addition to something that was truly a work of genius: the 1902 glider.

Here are some amazing photographs featuring the Wright brothers and their creations: various historic flights like the very first takeoff at Kitty Hawk, glider models including the 1902 and 1903 versions, mid-air shots and other fantastic vintage relics that tell of a time, just a century ago, when people daring to fly were labeled as mad.


A crumpled glider wrecked by the wind on the Hill of the Wreck (named after a shipwreck), 1900.

Left side view of the glider flying as a kite, in level flight; Kitty Hawk, North Carolina.

Orville at the left wing end of the upended glider, bottom view; Kitty Hawk, North Carolina, 1901.

Rear view of flight 46, Orville turning to the left; Huffman Prairie, Dayton, Ohio.

A glide with the double-rudder machine moving to the left, north slope of Big Kill Devil Hill, 1902.

Katharine Wright, wearing a leather jacket, cap, and goggles, aboard the Wright Model HS airplane with Orville, 1915.

Wilbur and Orville assembling the 1903 machine in the new camp building at Kill Devil Hills; Kitty Hawk, North Carolina, 1903.

First flight: 120 feet in 12 seconds, 10:35 a.m.; Kitty Hawk, North Carolina, 1903. Orville Wright is at the controls, lying prone on the lower wing with his hips in the cradle that operated the wing-warping mechanism. Wilbur Wright, running alongside to balance the machine, has just released his hold on the forward upright of the right wing. The starting rail, the wing-rest, a coil box, and other items needed for flight preparation are visible behind the machine.

Wilbur Wright in prone position in the damaged flying machine after the unsuccessful trial of December 14, 1903; Kitty Hawk, North Carolina.

Wilbur in motion at left holding one end of the glider (rebuilt with a single vertical rudder), Orville lying prone in the machine, and Dan Tate at right; Kitty Hawk, North Carolina, 1902.

Wright brothers glider in mid-flight, 1911.

Avatar Therapy Shows Great Promise In Silencing Voices In Schizophrenics


A great deal has been written about schizophrenia in the past, mostly reports about new theories pertaining to the development of the disease or new types of pharmacological treatment. Considerable interest and effort are being invested in battling this severe psychological disorder, since it affects so many people (as many as 1 in 100) and its effects make living a normal life nearly impossible. A new alternative therapy, which assigns avatars to the patients' tormenting voices, was recently reported, with great promise that it might help countless people suffering from the disease. Results so far are looking great, and a large-scale trial is scheduled to run shortly.

The most common symptoms of schizophrenia are delusions (false beliefs) and auditory hallucinations (hearing voices). The latter are simply devastating, making it impossible for sufferers to concentrate, work or maintain healthy relationships. To silence the voices in people's heads, existing therapy relies on medication, "talking therapy" or a combination of both. Even with the most effective anti-psychotic medication, around one in four people with schizophrenia continue to suffer from persecutory auditory hallucinations, severely impairing their ability to concentrate.

Researchers at King's College London have developed a novel system that battles the voices inside a schizophrenic individual's head by creating avatars of those voices so that they may be confronted more easily. Through the computer-based system, patients design an avatar, choosing the face and voice of the entity they believe is talking to them. The computer then synchronises the avatar's lips with its speech, enabling a therapist to speak to the patient through the avatar in real time. The therapist encourages the patient to oppose the voice and gradually teaches them to take control of their hallucinations.

“Auditory hallucinations are a very distressing experience that can be extremely difficult to treat successfully, blighting patients’ lives for many years.  There will be a rigorous randomised study of this intriguing new therapy with 142 people who have experienced distressing voices for many years.

“The beauty of the therapy is its simplicity and brevity. Most other psychological therapies for these conditions are costly and take many months to deliver. If we show that this treatment is effective, we expect it could be widely available in the UK within just a couple of years as the basic technology is well developed and many mental health professionals already have the basic therapy skills that are needed to deliver it.”

An early pilot of the therapy was conducted with 16 patients, most of whom reported dramatic improvements in suppressing the nagging voices inside their heads. Three of the patients stopped hearing voices completely, after experiencing auditory hallucinations for 16, 13 and 3.5 years, respectively.

Professor Julian Leff,  Emeritus Professor, UCL Mental Health Sciences, said: “Even though patients interact with the avatar as though it was a real person, because they have created it, they know that it cannot harm them, as opposed to the voices, which often threaten to kill or harm them and their family. As a result the therapy helps patients gain the confidence and courage to confront the avatar, and their persecutor."

“We’ve found that this helps them to recognise that the voices originate within their own mind and reinforces their control over the hallucinations.”

It's really inspiring to hear of a therapy with such promising results that doesn't rely on medication, which should be the last course of action. The larger-scale study at the IoP will begin enrolling its first patients shortly; the team is currently training the therapists and research staff to deliver the avatar therapy and finalising the study set-up. The first results of this larger study are expected soon.