"The world is a big place, but it has become smaller
with the advent of technologies that put people from
all over the world in the palm of their hands."

Monday, 21 January 2019

Drones you can control with your mind

At the Global Education and Skills Forum in Dubai last week, Emotiv, an Australian-based company, was showing off one of its highly impressive innovations.

The company has developed a headset which monitors your brainwaves and can be used to control electronic devices, including drones.

We have previously written about Samsung’s plans for flying screens that follow you around and are controlled through hand gestures and eye movements. This invention, however, takes things to a whole new level.

Wow! Are you telling me people can literally move objects using the power of their minds?

Yes indeed. We are literally at a point in society where you can become a poor man’s version of Professor X for just a few hundred bucks.

It’s an exciting new horizon and offers a whole range of possibilities, particularly for people with disabilities. Emotiv has designed many of its technologies with disabled people in mind. Think for a second of the recently departed Stephen Hawking. If it were not for a sophisticated computer, he would have been severely limited in his ability to communicate, and the world would never have come to know his brilliant insights on life and the universe. If people’s bodies cannot perform as they wish, why not augment their abilities with technology to widen the scope of what’s possible?

How does it work?

First, you put the headset on. Fact: wearing it will make you look like a bit of a goober. I’d argue that’s a price well worth paying given the incredible stuff you can do with it.

The headset is an electroencephalogram (EEG) device which picks up your brain’s electrical impulses through sensors on your scalp. It records them on a computer and translates those thought patterns into flight instructions for a small drone.

Basically – you imagine the drone lifting off the ground and, voila, it does. It doesn’t seem from the videos that you have much control aside from take-off and landing but, being honest, that’s already pretty frigging cool!

As a matter of fact – these headsets are not primarily for controlling drones. They have been developed to measure an individual’s brain functions, including concentration and stress levels, and can be worn by users who have an interest in learning more about their brain or becoming more productive. There are apps which can show your brain activity in real time.
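To make the idea concrete, here is a minimal sketch (not Emotiv's actual software) of how an EEG window might be turned into a flight command: estimate the signal power in a frequency band associated with concentration, and trigger take-off when it crosses a threshold. The sample rate, band edges, and threshold below are illustrative assumptions.

```python
import math

def band_power(samples, sample_rate, f_lo, f_hi):
    """Estimate signal power in the [f_lo, f_hi] Hz band via a naive DFT."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * sample_rate / n
        if f_lo <= freq <= f_hi:
            re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(-samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

def drone_command(eeg_window, sample_rate=128, threshold=5.0):
    """Issue TAKE_OFF when beta-band (13-30 Hz) activity is strong, else LAND."""
    beta = band_power(eeg_window, sample_rate, 13.0, 30.0)
    return "TAKE_OFF" if beta > threshold else "LAND"

# Simulated one-second windows: a "focused" 20 Hz rhythm vs. a flat signal.
focused = [math.sin(2 * math.pi * 20 * t / 128) for t in range(128)]
relaxed = [0.01] * 128
print(drone_command(focused))  # strong beta power -> TAKE_OFF
print(drone_command(relaxed))  # -> LAND
```

A real headset would of course stream many channels of noisy data and use trained classifiers rather than a single fixed threshold, but the pipeline shape (sense, extract features, map to a command) is the same.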

How AI Is Impacting Industries Worldwide

A common goal among companies in today’s data-driven world is to become smarter—to know where the market opportunities lie, where supply chain logjams are and where process improvements can be found. Data science has been the fuel behind this trend, and now data science is itself becoming smarter. Thanks to astonishing advancements in artificial intelligence (AI) and its sub-segments machine learning and deep learning, companies are achieving new levels of efficiency in data analysis that impact their entire business. The rising tide of AI adoption across industries will drive significant growth in the next decade, with AI software revenue set to reach almost $90 billion by 2025. AI’s presence is tantalizing to data scientists and business managers alike who seek to let machines do the number crunching to make the business smarter on a holistic level.

AI on the Fast Track

A leading indicator of a market segment’s growth path can usually be found by following the money trail. Investors and venture capital (VC) firms are always looking for big growth opportunities, and they are finding one now in the AI business. Forbes recently reported that there has been a 14x increase in the number of active AI startups since 2000, and investment into these startups by VC firms has increased 6x in that period. Meanwhile, companies that both build and utilize AI applications are on a similar growth path, with jobs requiring AI skills increasing 4.5x since 2013.

IT Is a Big AI Beneficiary

It should come as no surprise that the department that deals full-time with data—namely the IT organization—is perhaps the biggest beneficiary of AI’s capabilities. A Harvard Business Review study reports that between 34 and 44 percent of the global companies surveyed are using AI to help resolve employee technical support issues (imagine a smart response system to streamline common questions and troubleshoot others), automate internal system enhancements (machine codes can be used to calculate where bottlenecks can be fixed), and ensure that employees only use technology from approved vendors (picture a smart authorization engine that keeps up with daily updates and knows vendor subsidiaries and partners).

But AI Is Cross-enterprise Too

Where else is AI finding a home? Among the most common examples of AI in the enterprise are image recognition and tagging, patient data processing, localization and mapping, predictive maintenance, predicting and thwarting security threats, and intelligent recruitment and HR management techniques. But perhaps the most active adoption is being seen in the marketing and sales operation, where intelligent use of data and the ability to learn from human interactions can produce big financial benefits. In a Statista worldwide survey, 87 percent of current AI adopters said they were using or considering using AI for sales forecasting and for improving email marketing. While sales forecasting is often automated by technology to a point, it can be vastly improved with an AI agent that monitors and reacts to customer interactions and shifting market patterns. Email marketers can similarly create the sense of one-to-one marketing through more intelligent targeting and content creation for various audiences.
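As a toy illustration of the baseline such an AI agent improves upon, here is a plain least-squares trend forecast in Python (illustrative numbers, standard library only, not any vendor's actual forecasting API):

```python
def fit_trend(sales):
    """Fit y = intercept + slope * t to a sales history by least squares."""
    n = len(sales)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(sales) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, sales))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return slope, intercept

def forecast(sales, periods_ahead=1):
    """Extrapolate the fitted trend a given number of periods forward."""
    slope, intercept = fit_trend(sales)
    return intercept + slope * (len(sales) - 1 + periods_ahead)

# Hypothetical monthly sales figures for six months.
monthly_sales = [100, 110, 119, 131, 140, 152]
print(round(forecast(monthly_sales, 1), 1))  # -> 161.5
```

An AI-driven forecaster would layer signals such as customer interactions, seasonality, and market shifts on top of this kind of trend line, which is what makes the improvement over static automation possible.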

The bottom line is also important. McKinsey found that companies that benefit from AI initiatives and have invested in infrastructure to support its scale achieve profit margins three to 15 percentage points higher. Healthcare, financial services, and professional services are seeing the greatest increases in their profit margins as a result of AI adoption.

Examples of Companies Taking the Lead in AI Adoption

Here are a few examples of how specific companies in various industries are leveraging AI in their businesses:

According to the McKinsey study, tech giants including Baidu and Google spent between $20B and $30B on AI in 2016, with 90 percent of this going to R&D and deployment and 10 percent to AI acquisitions. External investment in AI has grown threefold since 2013.

Netflix has also achieved impressive results from the AI algorithm it uses to personalize recommendations to its 100 million subscribers worldwide, improving search results and avoiding canceled subscriptions from frustrated customers who couldn’t find what they wanted (with a potential impact of $1B annually). 

Financial data specialist Bloomberg uses techniques like computer vision and natural language processing to broaden the information available through the ubiquitous terminals that financial staff use to access market information. Users can phrase queries in natural language instead of specialized technical commands, and AI analyzes and executes them.

Uber has a core team providing pre-packaged machine learning algorithms 'as-a-service' to its team of mobile app developers, map experts, and autonomous driving teams. These capabilities are used to better predict traveling habits and improve maps using computer vision, and to create algorithms for its autonomous vehicles. 

And Royal Bank of Scotland recently launched a natural language processing AI bot that answers its banking customers' questions and performs simple banking tasks such as money transfers, with the goal of making digital customer support as powerful as face-to-face interaction.

AI and machine learning are revolutionizing the way companies access and process data to become smarter and more efficient organizations. And IT and data science teams are gearing up for the immense benefits of AI in their enterprises.

Monday, 14 January 2019

Digital Transformation


Change is a constant in business, and the companies that have outperformed their competitors over the years have steadily kept pace with changes in the economic landscape, technology innovation, and the latest business practices. And now that the global economy has stormed into the digital era with gusto, companies are on the cusp of reaping tremendous new benefits—thanks to their adoption of digital transformation. Digital transformation isn’t just creating one-off efficiencies in various departments. It is disrupting entire business operations that span across back-office, front-office and customer-facing processes. A full 87 percent of companies believe that digital transformation is a competitive opportunity.

Digital Transformation Creates Value

Digital tools and processes create a new foundation on which businesses can operate. They connect companies to customers, internal business units with each other, and employees with new opportunities to meet their career objectives. Digital supply networks are being built and rebuilt to improve efficiencies and create value for every business. We can already see companies benefiting from the digital transformation in multiple ways. According to the State of Digital Transformation Report from the Altimeter Group, 41 percent of companies that digitally transform customer experience increased market share, 37 percent see increased customer engagement on digital channels, 37 percent have more positive employee morale, 32 percent have greater web and mobile engagement, and 30 percent experience increased revenue. And since digital transformation touches so many areas of the business, benefits begin to snowball as the entire organization comes on board.

Customers Love It

Digital transformation impacts the customer experience in a big way. Today’s demanding consumers can easily access product and service data online, see how others are singing your praises on social media, identify solutions tailored for their business needs, and purchase and service products easily and affordably. Improving the customer experience is one of the top three digital transformation drivers for mature businesses (along with increasing the speed of innovation and improving time to market), whereas less mature businesses are focused on cost reduction and profitability improvements. It’s all about the customer journey, and digital transformation makes it a more enjoyable ride.

Companies can use multiple digital tools and techniques to improve customer satisfaction. Big data and analytics make it easier to understand consumer behavior and target solutions accordingly, and digital marketing and digital selling teams are both keenly aware of how consumers respond to personalized content and social media-enabled customer service. Digital transformation revolutionizes the way customers interact with business and keeps them coming back for the experience.

Embrace the Edge of Technology Innovation

A new generation of smart technologies is having a tremendous influence on how companies adopt digital transformation, and they are preparing for it financially. The budget for digital transformation-related technologies at the average company is expected to grow to 28 percent by 2018, up from 18 percent today. And the types of technologies involved are not only intriguing in what they do, but are also geared to provide tangible benefits. One of Gartner’s predictions for 2018, for example, is that companies investing in Internet of Things (IoT)-based operational sensing and cognitive-based situational awareness will see 30 percent improvements in cycle times. With IoT, everything is connected, from consumer devices to data networks that evaluate dynamic data from the consumer edge to provide the best analysis and service in real time.

Smart machines, artificial intelligence (AI) and intelligent automation are also making a big splash as companies digitally transform. Analytics are routinely built into machines and devices to gauge and adapt to varying consumer needs and requests (think Echo and Siri), and intelligent automation (which combines AI analytics with automated systems) is becoming the norm as companies seek greater efficiencies and cost reductions. In fact, according to Accenture, 92 percent of businesses say that intelligent automation will be put to wider use within their company during the next 12 months.

Thursday, 10 January 2019

Electronics of the future: A new energy-efficient mechanism using the Rashba effect

Scientists have proposed new quasi-1D materials for potential spintronic applications, an upcoming technology that exploits the spin of electrons. They performed simulations to demonstrate the spin properties of these materials and explained the mechanisms behind their behavior.

Conventional electronics is based on the movement of electrons and mainly concerns their electric charge; unfortunately, we are close to reaching the physical limits for improving electronic devices. However, electrons bear another intrinsic quantum-physical property called "spin," which can be interpreted as a type of angular momentum and can be either "up" or "down." While conventional electronic devices do not exploit the spin of the electrons they employ, spintronics is a field of study in which the spin of the conducting electrons is crucial: serious improvements in performance and new applications can be attained through "spin currents."
As promising as spintronics sounds, researchers are still trying to find convenient ways of generating spin currents with material structures whose electrons possess desirable spin properties. The Rashba-Bychkov effect (or simply Rashba effect), which involves a splitting of spin-up and spin-down electrons due to broken inversion symmetry, could potentially be exploited for this purpose. A pair of researchers from Tokyo Institute of Technology, including Associate Professor Yoshihiro Gohda, have proposed a new mechanism to generate a spin current without energy loss, derived from a series of simulations of new quasi-1D materials based on bismuth-adsorbed indium that exhibit a giant Rashba effect. "Our mechanism is suitable for spintronic applications, having an advantage that it does not require an external magnetic field to generate nondissipative spin current," explains Gohda. This advantage would simplify potential spintronic devices and allow for further miniaturization.
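For the curious, the Rashba effect in a two-dimensional electron gas is conventionally modeled by the textbook Hamiltonian below (a standard form from the literature, not taken from the Tokyo Tech paper itself):

```latex
H_R = \alpha_R \left( \sigma_x k_y - \sigma_y k_x \right),
\qquad
E_{\pm}(\mathbf{k}) = \frac{\hbar^2 k^2}{2 m^*} \pm \alpha_R \, |\mathbf{k}|
```

Here \(\alpha_R\) is the Rashba coupling constant, \(\sigma_{x,y}\) are the Pauli spin matrices, and \(\mathbf{k}\) is the electron wave vector. The \(\pm\) branches are the momentum-split spin-up and spin-down bands, and a large \(\alpha_R\) is what is meant by a "giant" Rashba effect.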
The researchers conducted simulations based on these materials to demonstrate that the Rashba effect in them can be large and only requires applying a certain voltage to generate spin currents. By comparing the Rashba properties of multiple variations of these materials, they provided explanations for the observed differences in the materials' spin properties and a guide for further materials exploration.
This type of research is very important as radically new technologies are required if we intend to further improve electronic devices and go beyond their current physical limits. "Our study should be important for energy-efficient spintronic applications and stimulating further exploration of different 1D Rashba systems," concludes Gohda. From faster memories to quantum computers, the benefits of better understanding and exploiting Rashba systems will certainly have enormous implications.

Thursday, 3 January 2019

Controlling neurons with light -- but without wires or batteries

Dear Friends,
        Here is some amazing research from the University of Arizona College of Engineering.

University of Arizona biomedical engineering professor Philipp Gutruf is first author on the paper "Fully implantable, optoelectronic systems for battery-free, multimodal operation in neuroscience research," published in Nature Electronics.

Optogenetics is a biological technique that uses light to turn specific neuron groups in the brain on or off. For example, researchers might use optogenetic stimulation to restore movement in case of paralysis or, in the future, to turn off the areas of the brain or spine that cause pain, eliminating the need for -- and the increasing dependence on -- opioids and other painkillers.

"We're making these tools to understand how different parts of the brain work," Gutruf said. "The advantage with optogenetics is that you have cell specificity: You can target specific groups of neurons and investigate their function and relation in the context of the whole brain."

In optogenetics, researchers load specific neurons with proteins called opsins, which convert light to electrical potentials that make up the function of a neuron. When a researcher shines light on an area of the brain, it activates only the opsin-loaded neurons.

The first iterations of optogenetics involved sending light to the brain through optical fibers, which meant that test subjects were physically tethered to a control station. Researchers went on to develop a battery-free technique using wireless electronics, which meant subjects could move freely.

But these devices still came with their own limitations -- they were bulky and often attached visibly outside the skull, they didn't allow for precise control of the light's frequency or intensity, and they could only stimulate one area of the brain at a time.

Taking More Control and Less Space

"With this research, we went two to three steps further," Gutruf said. "We were able to implement digital control over intensity and frequency of the light being emitted, and the devices are very miniaturized, so they can be implanted under the scalp. We can also independently stimulate multiple places in the brain of the same subject, which also wasn't possible before."

The ability to control the light's intensity is critical because it allows researchers to control exactly how much of the brain the light is affecting -- the brighter the light, the farther it will reach. In addition, controlling the light's intensity means controlling the heat generated by the light sources, avoiding the accidental, heat-induced activation of neurons.

The wireless, battery-free implants are powered by external oscillating magnetic fields, and, despite their advanced capabilities, are not significantly larger or heavier than past versions. In addition, a new antenna design has eliminated a problem faced by past versions of optogenetic devices, in which the strength of the signal transmitted to the device varied with the orientation of the subject's head: a subject would turn its head and the signal would weaken.

"This system has two antennas in one enclosure, between which we switch the signal back and forth very rapidly, so we can power the implant at any orientation," Gutruf said. "In the future, this technique could provide battery-free implants that provide uninterrupted stimulation without the need to remove or replace the device, resulting in less invasive procedures than current pacemaker or stimulation techniques."

Devices are implanted with a simple surgical procedure similar to surgeries in which humans are fitted with neurostimulators, or "brain pacemakers." They cause no adverse effects to subjects, and their functionality doesn't degrade in the body over time. This could have implications for medical devices like pacemakers, which currently need to be replaced every five to 15 years.

The paper also demonstrated that animals implanted with these devices can be safely imaged with computer tomography, or CT, and magnetic resonance imaging, or MRI, which allow for advanced insights into clinically relevant parameters such as the state of bone and tissue and the placement of the device.

Posted By Ram Kamal Yadav

Monday, 10 December 2018

New method peeks inside the 'black box' of artificial intelligence

A new method to decode the decision-making processes used by 'black box' machine learning algorithms works by finding the minimum input that will still yield a correct answer. In this example, the researchers first presented an algorithm with a photo of a sunflower and asked 'What color is the flower?' This resulted in the correct answer, 'yellow.' The researchers found that they could get the same correct answer, with a similarly high degree of confidence, by asking the algorithm a single-word question: 'Flower?'

Credit: Shi Feng/University of Maryland

Artificial intelligence -- specifically, machine learning -- is a part of daily life for computer and smartphone users. From autocorrecting typos to recommending new music, machine learning algorithms can help make life easier. They can also make mistakes.

It can be challenging for computer scientists to figure out what went wrong in such cases. This is because many machine learning algorithms learn from information and make their predictions inside a virtual "black box," leaving few clues for researchers to follow.

A group of computer scientists at the University of Maryland has developed a promising new approach for interpreting machine learning algorithms. Unlike previous efforts, which typically sought to "break" the algorithms by removing key words from inputs to yield the wrong answer, the UMD group instead reduced the inputs to the bare minimum required to yield the correct answer. On average, the researchers got the correct answer with an input of less than three words.

In some cases, the researchers' model algorithms provided the correct answer based on a single word. Frequently, the input word or phrase appeared to have little obvious connection to the answer, revealing important insights into how some algorithms react to specific language. Because many algorithms are programmed to give an answer no matter what -- even when prompted by a nonsensical input -- the results could help computer scientists build more effective algorithms that can recognize their own limitations.

The researchers will present their work on November 4, 2018 at the 2018 Conference on Empirical Methods in Natural Language Processing.

"Black-box models do seem to work better than simpler models, such as decision trees, but even the people who wrote the initial code can't tell exactly what is happening," said Jordan Boyd-Graber, the senior author of the study and an associate professor of computer science at UMD. "When these models return incorrect or nonsensical answers, it's tough to figure out why. So instead, we tried to find the minimal input that would yield the correct result. The average input was about three words, but we could get it down to a single word in some cases."

In one example, the researchers entered a photo of a sunflower and the text-based question, "What color is the flower?" as inputs into a model algorithm. These inputs yielded the correct answer of "yellow." After rephrasing the question into several different shorter combinations of words, the researchers found that they could get the same answer with "flower?" as the only text input for the algorithm.
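A simplified sketch of this reduction procedure is below. The UMD method selects which word to drop using model confidences; this greedy version, paired with a hypothetical toy model, just keeps any removal that preserves the original answer:

```python
def reduce_input(question, model, answer):
    """Greedily drop words while the model still returns the original answer."""
    words = question.split()
    changed = True
    while changed:
        changed = False
        for i in range(len(words)):
            candidate = words[:i] + words[i + 1:]
            # Keep the removal if the (non-empty) shorter input still works.
            if candidate and model(" ".join(candidate)) == answer:
                words = candidate
                changed = True
                break
    return " ".join(words)

def toy_model(text):
    """Hypothetical stand-in model: answers "yellow" whenever it sees "flower"."""
    return "yellow" if "flower" in text.lower() else "unknown"

q = "What color is the flower?"
print(reduce_input(q, toy_model, toy_model(q)))  # -> "flower?"
```

Run against the toy model, the full question collapses to the single token "flower?", mirroring the sunflower example from the study.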

In another, more complex example, the researchers used the prompt, "In 1899, John Jacob Astor IV invested $100,000 for Tesla to further develop and produce a new lighting system. Instead, Tesla used the money to fund his Colorado Springs experiments."

They then asked the algorithm, "What did Tesla spend Astor's money on?" and received the correct answer, "Colorado Springs experiments." Reducing this input to the single word "did" yielded the same correct answer.

The work reveals important insights about the rules that machine learning algorithms apply to problem solving. Many real-world issues with algorithms result when an input that makes sense to humans results in a nonsensical answer. By showing that the opposite is also possible -- that nonsensical inputs can also yield correct, sensible answers -- Boyd-Graber and his colleagues demonstrate the need for algorithms that can recognize when they answer a nonsensical question with a high degree of confidence.

"The bottom line is that all this fancy machine learning stuff can actually be pretty stupid," said Boyd-Graber, who also has co-appointments at the University of Maryland Institute for Advanced Computer Studies (UMIACS) as well as UMD's College of Information Studies and Language Science Center. "When computer scientists train these models, we typically only show them real questions or real sentences. We don't show them nonsensical phrases or single words. The models don't know that they should be confused by these examples."

Most algorithms will force themselves to provide an answer, even with insufficient or conflicting data, according to Boyd-Graber. This could be at the heart of some of the incorrect or nonsensical outputs generated by machine learning algorithms -- in model algorithms used for research, as well as real-world algorithms that help us by flagging spam email or offering alternate driving directions. Understanding more about these errors could help computer scientists find solutions and build more reliable algorithms.

"We show that models can be trained to know that they should be confused," Boyd-Graber said. "Then they can just come right out and say, 'You've shown me something I can't understand.'"

In addition to Boyd-Graber, UMD-affiliated researchers involved with this work include undergraduate researcher Eric Wallace; graduate students Shi Feng and Pedro Rodriguez; and former graduate student Mohit Iyyer (M.S. '14, Ph.D. '17, computer science).

The research presentation, "Pathologies of Neural Models Make Interpretation Difficult," Shi Feng, Eric Wallace, Alvin Grissom II, Pedro Rodriguez, Mohit Iyyer, and Jordan Boyd-Graber, will be presented at the 2018 Conference on Empirical Methods in Natural Language Processing on November 4, 2018.

This work was supported by the Defense Advanced Research Projects Agency (Award No. HR0011-15-C-011) and the National Science Foundation (Award No. IIS1652666). The content of this article does not necessarily reflect the views of these organizations.

Friday, 7 December 2018

Flexible electronic skin aids human-machine interactions

Human-machine interactions

 Human skin contains sensitive nerve cells that detect pressure, temperature and other sensations that allow tactile interactions with the environment. To help robots and prosthetic devices attain these abilities, scientists are trying to develop electronic skins. Now researchers report a new method that creates an ultrathin, stretchable electronic skin, which could be used for a variety of human-machine interactions.

Friends, as reported via the American Chemical Society, researchers have created an electronic skin for virtual reality and several other applications. It is fully flexible and can be applied much like a tattoo, so today we introduce you to some of the terminology around it.
Electronic skin could be used for many applications, including prosthetic devices, wearable health monitors, robotics and virtual reality. A major challenge is transferring ultrathin electrical circuits onto complex 3D surfaces and then making the electronics bendable and stretchable enough to survive repeated flexing. Some scientists have developed flexible "electronic tattoos" for this purpose, but their production is typically slow and expensive and requires cleanroom fabrication methods such as photolithography. Mahmoud Tavakoli, Carmel Majidi and colleagues wanted to develop a fast, simple and inexpensive method for producing thin-film circuits with integrated microelectronics.
In the new approach, the researchers patterned a circuit template onto a sheet of transfer tattoo paper with an ordinary desktop laser printer. They then coated the template with silver paste, which adhered only to the printed toner ink. On top of the silver paste, the team deposited a gallium-indium liquid metal alloy that increased the circuit's electrical conductivity and flexibility. Finally, they added external electronics, such as microchips, with a conductive "glue" made of vertically aligned magnetic particles embedded in a polyvinyl alcohol gel. The researchers transferred the electronic tattoos onto various objects and demonstrated several applications of the new method, such as controlling a robotic prosthetic arm, monitoring human skeletal muscle activity and incorporating proximity sensors into a 3D model of a hand.
So friends, 3D models and electronic tattoos are things that, in the coming future, we will surely find adhering to our hands and other parts of our bodies. Thank you. By RAM KAMAL YADAV

Wednesday, 25 July 2018

Quantum step forward in protecting communications from hackers


Friends, in our previous post we discussed what use information has in human life and how dependent people are on it. In today's post you may not understand every term, but these terms will certainly intersect with your life somewhere in the times ahead, so do read this post once to stay acquainted with the new changes in the world. Even if not everything, you will certainly take away something that puts you a step ahead of most people. In this post we describe a quantum step in the protection of communications and information.

Quantum step forward in protecting communications from hackers: According to the researchers, a new quantum-based process for distributing secure information along communication lines could succeed in preventing serious security breaches. Securing highly sensitive information, such as hospital records and bank details, is a huge challenge faced by companies and organizations around the world. Standard communication systems are vulnerable to hacks in which encrypted information can be intercepted and copied. Hackers can currently copy transmitted information, but without a method for breaking the encryption that protects it, they cannot read it. That means the information may be safe for now, but there is no guarantee it will stay safe forever, since supercomputers under development could potentially crack particular encryption schemes in the future. Researchers at York investigated a prototype, based on the principles of quantum mechanics, that has the potential to side-step the weaknesses of current communications while also keeping information secure well into the future.

Powerful attack: Dr Cosmo Lupo, of the University of York's Department of Computer Science, said: "Quantum mechanics has come a long way, but we still face important problems that have to be overcome with further experimentation. One such problem is that a hacker can attack the electronic equipment used for transmitting information by jamming the detectors used to collect and measure the photons that carry it. Such an attack is powerful because we assume that a given device works according to its technical specifications and will therefore do its job. If a hacker can attack a detector and change the way it works, security is essentially compromised. Quantum mechanics, however, allows secure communication even without assumptions about how the electronic devices will behave. By removing these assumptions we pay the price of a lower communication rate, but gain an improved standard of security."

Two signals: Rather than relying on possibly compromised electronic components at the point where information must be detected and read, the researchers found that communication between the sender and receiver was more secure if an untrusted detector sat at a separate point in the link. The detector receives a combination of two signals, one from the sender and one from the receiver, and can only read the outcome of this combined signal, not its individual components. Dr Lupo said: "In our work, not only have we provided the first rigorous mathematical proofs that this 'detector-independent' design works, but we have also considered a scheme compatible with existing optical-fibre communication networks. In principle, our proposal could allow unbreakable codes to be exchanged over the internet without big changes to the actual infrastructure. We are still at the prototype stage, but by finding ways to reduce the cost of these systems, we are getting closer to making quantum communication a reality." Friends, if after reading this post you feel you have learned something that only a few people know, then clearly you are pulling ahead of the crowd, so do like, share and comment on this post.
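The "detector sees only the combined signal" idea has a simple classical analogy (a toy illustration only, not the actual quantum protocol): if an untrusted middle node announces just the parity of two private bits, the legitimate receiver can decode, but the node itself cannot tell which bits were sent.

```python
def middle_detector(bit_a, bit_b):
    """An untrusted relay that only ever learns the parity of the two signals."""
    return bit_a ^ bit_b

# For every pair of private bits, two distinct input pairs map to each
# announced parity -- (0,0)/(1,1) give 0 and (0,1)/(1,0) give 1 -- so the
# relay cannot identify the bits, yet the receiver, knowing his own bit, can.
for alice_bit in (0, 1):
    for bob_bit in (0, 1):
        parity = middle_detector(alice_bit, bob_bit)
        recovered = parity ^ bob_bit   # Bob recovers Alice's bit
        assert recovered == alice_bit
print("relay learns parity only")
```

In the quantum version the combined measurement is on photons and the security is guaranteed by quantum mechanics rather than by this XOR trick, but the structural point is the same: the untrusted component never holds the individual signals.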


Wednesday, 18 July 2018

Importance of information in Human life..

Friends, in today's changing world, keeping up with that change has become a real challenge. And it is absolutely necessary that people adapt themselves to this world; only then can a new society and a new human life be imagined and made real. So, friends, through this post I want to share a thought of my own with you. There will be no talk of technology today, but when it comes to human ideas, these will certainly be of benefit to your future.

A few steps, starting with the most important:
1. To change yourself, you first have to change your thinking. Clearly, then, if thinking matters so much, how do we change it?
Friends, our thinking is built by the environment around us, and by environment I mean our friends, our teachers, our gadgets, and everything else we see and hear.

2. The influence of information on human life. Information has a profound effect on human life: whatever information we heard and absorbed from childhood onwards, our mind dwelt on that information and our body acted accordingly.
Truthfully, whoever a person is today, in whatever circumstances, it is because of the information and the resulting ideas they received earlier.

3. The relationship between information and creativity. Information and creativity are deeply connected. When we hear or learn some piece of information, our mind forms its own view of it, and creativity related to that information is born; refined, that creativity turns into ideas, and ideas are the key to change in human life.

4. The search for information. Among all human qualities is this one too: how strong is a person's drive to seek out information capable of changing their thinking? That is why people must actively pursue new information.

Conclusion: a person cannot change as fast as this rapidly changing world does, but by seeking out new information they can change their ideas, and believe me, those ideas will on their own bring you and your life in step with the world. That is why I present new information on this blog, so that you can understand this changing world and give your thinking new momentum, so that new ideas are born and human society benefits from them.
Thank you.
Friends, if you liked this post, please share it on Facebook and Twitter so that your friends can benefit too. And if you have any questions, visit studentdesk.com, which we built especially with students in mind; you can drop your questions there.

Friday, 6 July 2018

Future ultrahigh density data storage..

The development of high-density data storage devices requires the highest possible density of elements in an array made up of individual nanomagnets. The closer they are together, the greater the magnetic interactions between them.
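To see why spacing matters, here is a rough point-dipole estimate of how the coupling between two nanomagnets grows as they are packed closer together. This is my own back-of-the-envelope sketch, not a result from the article: the magnetic moment value is an assumed, plausible figure for a ~25 nm magnetic island, and real arrays would need micromagnetic simulation.

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, T*m/A
m = 1e-17              # assumed moment of one nanomagnet, A*m^2

def dipolar_energy(r_m: float) -> float:
    """Magnitude of the dipole-dipole coupling energy (J) for two
    parallel point moments a distance r apart, head-to-tail."""
    return MU0 * m**2 / (2 * math.pi * r_m**3)

for spacing_nm in (100, 50, 25):
    E = dipolar_energy(spacing_nm * 1e-9)
    print(f"{spacing_nm:3d} nm spacing -> coupling ~ {E:.2e} J")
```

Because the interaction scales as 1/r³, halving the spacing makes the coupling eight times stronger, which is exactly the trade-off the paragraph above describes: denser arrays mean much stronger crosstalk between neighbouring bits.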

Today, magnetic recording is the leading technology for mass data storage. Its central role is being reinforced by the success of cloud computing, which requires huge volumes of data to be stored and managed across fleets of servers. Even so, the hard-disk storage industry currently stands at a crossroads, because existing magnetic recording technologies are unable to reach densities beyond 1 Tbit/in².
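To put 1 Tbit/in² in perspective, a quick back-of-the-envelope calculation (my own sketch, not a figure from the text) gives the area each bit gets at that density:

```python
import math

INCH_NM = 25.4e6          # one inch expressed in nanometres
bits_per_sq_inch = 1e12   # 1 Tbit/in^2, the density ceiling cited above

# Area available to a single bit, and the side of a square bit cell.
area_per_bit_nm2 = INCH_NM**2 / bits_per_sq_inch
cell_side_nm = math.sqrt(area_per_bit_nm2)

print(f"area per bit: {area_per_bit_nm2:.0f} nm^2")  # ~645 nm^2
print(f"cell side:    {cell_side_nm:.1f} nm")        # ~25.4 nm
```

A bit cell roughly 25 nm on a side is only about a hundred atoms across, which is why going much denser runs into the thermal-stability and writability problems discussed below.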

Pushing recording densities into the terabit regime requires new storage materials, novel recording schemes, and media designs that resolve the issues of signal-to-noise ratio, thermal stability, and writability. In this book, experts worldwide from universities, public research institutes, and industry collaborate to present the latest advances in magnetic recording from a media perspective and to highlight the technology's future prospects. Theoretical, experimental, and technological aspects are covered in a clear and comprehensive way, making the book a useful reference for final-year undergraduates, postgraduates, and research professionals in the magnetic recording field.

The explosive growth of information and the demand for ever smaller electronic devices call for new recording technologies and materials that combine high density, fast response, long retention times, and rewritability. According to current predictions, existing silicon-based computer circuits are approaching their physical limits. Substantial miniaturisation of electronic components and higher data-storage densities are critical for next-generation IT devices such as ultra-high-speed mobile computing, communication devices, and sophisticated sensors. This book offers a comprehensive introduction to key research achievements in high-density data storage, covering recording mechanisms, materials, and fabrication technologies that promise to overcome the physical limits of current data-storage systems. It serves as a useful guide to developing materials, technologies, and device structures optimised for future information storage, and will carry its readers into the fascinating world of the information technology to come.

A micro-electro-mechanical systems (MEMS) device has also been presented that can read both very high-density magnetic media and very high-density CD-ROMs. Both the magnetic and the optical read heads contain one or more cold-cathode MEMS e-beam cells. The e-beam probes a data bit, and the state of the bit is determined by a detector. Large arrays of such cells can read large areas of the storage medium simultaneously.
