Broadband Network  

Posted by technology2day


Broadband Network

In general, broadband refers to telecommunication in which a wide band of frequencies is available to transmit information. Because a wide band of frequencies is available, information can be multiplexed and sent on many different frequencies or channels within the band concurrently, allowing more information to be transmitted in a given amount of time (much as more lanes on a highway allow more cars to travel on it at the same time). Related terms are wideband (a synonym), baseband (a one-channel band), and narrowband (sometimes meaning just wide enough to carry voice, or simply "not broadband," and sometimes meaning specifically between 50 cps and 64 Kbps).

Various definers of broadband have assigned a minimum data rate to the term. Here are a few:
• Newton's Telecom Dictionary: "...greater than a voice grade line of 3 KHz...some say [it should be at least] 20 KHz."
• Jupiter Communications: at least 256 Kbps.
• IBM Dictionary of Computing: A broadband channel is "6 MHz wide."
It is generally agreed that Digital Subscriber Line (DSL) and cable TV are broadband services in the downstream direction.

Broadband Traffic

Types of traffic carried by the network
Modern networks have to carry integrated traffic consisting of voice, video and data. The Broadband Integrated Services Digital Network (B-ISDN) satisfies these needs. The types of traffic supported by a broadband network can be classified according to three characteristics:
• Bandwidth is the amount of network capacity required to support a connection.
• Latency is the amount of delay associated with a connection. Requesting low latency in the Quality of Service (QoS) profile means that the cells need to travel quickly from one point in the network to another.
• Cell-delay variation (CDV) is the range of delays experienced by each group of associated cells. Low cell-delay variation means a group of cells must travel through the network without getting too far apart from one another.
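
To make these three characteristics concrete, here is a minimal sketch of how a connection request might bundle them into a QoS profile. The struct, field names, and values are invented for illustration and do not correspond to any real ATM or B-ISDN API.

#include <stdio.h>

/* A hypothetical QoS profile carrying the three characteristics above. */
struct qos_profile {
    double bandwidth_mbps;   /* capacity required to support the connection */
    double latency_ms;       /* maximum acceptable delay across the network */
    double cdv_ms;           /* allowed cell-delay variation within a group of cells */
};

int main(void) {
    struct qos_profile video_call = { 4.0, 50.0, 2.0 };   /* example values only */
    printf("bandwidth=%.1f Mbps  latency=%.1f ms  CDV=%.1f ms\n",
           video_call.bandwidth_mbps, video_call.latency_ms, video_call.cdv_ms);
    return 0;
}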

Modern communication services
Society is becoming more informationally and visually oriented. Personal computing facilitates easy access, manipulation, storage, and exchange of information, and these processes require reliable data transmission. The means or media for communicating data are becoming more diverse. Communicating documents by images and the use of high-resolution graphics terminals provide a more natural and informative mode of human interaction than do voice and data alone. Video teleconferencing enhances group interaction at a distance. High-definition entertainment video improves the quality of pictures, but requires much higher transmission rates.
These new data transmission requirements may require new transmission means other than the present overcrowded radio spectrum. A modern telecommunications network (such as the broadband network) must provide all these different services (multi-services) to the user.
Differences between traditional (telephony) and modern communication services
Conventional telephony communicates using:
• the voice medium only,
• connections between only two telephones per call, and
• circuits of fixed bit-rates.
In contrast, modern communication services depart from the conventional telephony service in these three essential aspects. Modern communication services can be:
• multi-media,
• multi-point, and
• multi-rate.
These aspects are examined individually in the following three sub-sections.
Multi-media
A multi-media call may communicate audio, data, still images, or full-motion video, or any combination of these media. Each medium has different demands for communication quality, such as:
• bandwidth requirement,
• signal latency within the network, and
• signal fidelity upon delivery by the network.
The information content of each medium may affect the information generated by other media. For example, voice could be transcribed into data via voice recognition, and data commands may control the way voice and video are presented. These interactions most often occur at the communication terminals, but may also occur within the network.
Multi-point
A few examples will be used to contrast point-to-point communications with multi-point communications. Traditional voice calls are predominantly two party calls, requiring a point-to-point connection using only the voice medium. To access pictorial information in a remote database would require a point-to-point connection that sends low bit-rate queries to the database and high bit-rate video from the database. Entertainment video applications are largely point-to-multi-point connections, requiring one-way communication of full motion video and audio from the program source to the viewers. Video teleconferencing involves connections among many parties, communicating voice, video, as well as data. Offering future services thus requires flexible management of the connection and media requests of a multi-point, multi-media communication call [3][4].
Multi-rate
A multi-rate service network is one which flexibly allocates transmission capacity to connections. A multi-media network has to support a broad range of bit-rates demanded by connections, not only because there are many communication media, but also because a communication medium may be encoded by algorithms with different bit-rates. For example, audio signals can be encoded with bit-rates ranging from less than 1 kbit/s to hundreds of kbit/s, using different encoding algorithms with a wide range of complexity and quality of audio reproduction. Similarly, full motion video signals may be encoded with bit-rates ranging from less than 1 Mbit/s to hundreds of Mbit/s. Thus a network transporting both video and audio signals may have to integrate traffic with a very broad range of bit-rates.

Wireless Broadband Networks
What is a Wireless Broadband Network?
Wireless broadband terminology should not be confused with the generic term “broadband networking” or BISDN (Broadband Integrated Services Digital Network), which refers to various network technologies (fiber or optical) implemented by ISPs and NSPs to achieve transmission speeds higher than 155 Mbps for the Internet backbone. In a lay-person’s terms, BISDN is the wire and cable that run through walls, under floors, from pole to telephone pole, and beneath feet on a city street. BISDN is a concept and a set of services and developing standards for integrating digital transmission services in a broadband network of fiber optic and radio media. BISDN encompasses frame relay service for high-speed data that can be sent in large bursts, the Fiber Distributed-Data Interface (FDDI), and the Synchronous Optical Network (SONET). BISDN supports transmission from 2 Mbps to much higher transfer rates.
Wireless broadband, on the other hand, refers to the wireless network technology that addresses the “last mile” problem: connecting isolated customer premises to an ISP or carrier’s backbone network without leasing traditional T-1 or higher-speed copper or fiber channels from the local telecommunication service provider. Wireless broadband refers to fixed wireless connectivity that can be utilized by enterprises, businesses, households and telecommuters who travel from one fixed location to another fixed location. In its current implementation, it does not address the needs of “mobile users” on the road.

Technologically, wireless broadband is an extension of the point-to-point, wireless-LAN bridging concept to deliver a high-speed, high-capacity pipe that can be used for voice, multi-media and Internet access services. In simple implementations, the primary use of wireless broadband is connecting LANs to the Internet; in more sophisticated implementations, multiple services (data, voice, video) can be carried over the same pipe. The latter requires multiplexing equipment at customer premises or in a central hub.
From an implementation perspective, wireless broadband circumvents physical telecommunications networks; it is as feasible in rural areas as it is in urban ones. For regions that have not yet evolved to cable and copper wire infrastructures, vendor solutions that avoid costly installation, maintenance and upgrades mean skipping 120 years of telecommunications evolution. In other areas, deregulation is making the licensing process for Wireless Service Providers (WSPs) hassle free.
Wireless broadband is faster to market, and subscribers are added incrementally, bypassing those installations that are required before wired subscribers can connect.
What Wireless Broadband Is Not
Wireless broadband is for fixed wireless connection - it does not address the mobility needs which at present only 2.5G and 3G networks intend to provide. In the future, there is a technical possibility that broadband wireless radios can be miniaturized and installed in handheld devices; they might then be able to augment 3G in mobile applications. However, this is only a possibility of the physics and electronics - none of the vendors have any prototype products in this area. Of course, wireless broadband is expected to meet the needs of residential connections to the Internet, bypassing local telcos.
Four technologies for faster broadband in 2011
10G GPON


The use of PON (passive optical network) technology in fixed broadband networks has grown in popularity in the last couple of years, thanks to lower costs compared to using an optical fiber for each household. The technology calls for several households to share the same capacity, which is sent over a single optical fiber.
Today's systems have an aggregate download capacity of 2.5Gbps (bits per second). The move to 10G GPON increases that by a factor of four, hence the name. The technology is also capable of an upstream capacity of 10Gbps, which is eight times faster than current networks, according to Verizon Communications.
The increased capacity can either be used to handle more users or increase the bandwidth.
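
As a rough illustration of what the shared capacity means per subscriber, the short calculation below assumes a 1:32 split ratio (a common figure for PONs, though not one stated in the article) and divides the aggregate downstream rate evenly among the households on one fiber.

#include <stdio.h>

int main(void) {
    double gpon_down_gbps  = 2.5;    /* today's GPON aggregate downstream capacity */
    double xgpon_down_gbps = 10.0;   /* 10G GPON aggregate downstream capacity */
    int households         = 32;     /* assumed split ratio */

    printf("GPON:     %.1f Mbps per home\n", gpon_down_gbps  * 1000.0 / households);
    printf("10G GPON: %.1f Mbps per home\n", xgpon_down_gbps * 1000.0 / households);
    return 0;
}

In practice the capacity is shared statistically rather than divided evenly, so an individual subscriber can burst well above these per-home averages.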
In December 2009, Verizon announced that it had conducted the first field-test for the technology. Since then, a number of operators have conducted tests, including France Telecom, Telecom Italia, Telefonica, Portugal Telecom, China Mobile and China Unicom, according to Huawei.
The first commercial services based on 10G GPON are expected in the second half of 2011, according to Alcatel-Lucent. Pioneering operator Verizon hasn't announced any commercial plans yet, according to a spokesman.
Besides broadband, the technology is also being pitched for mobile backhaul use.

VDSL2
The DSL family of technologies still dominates the fixed broadband world. To ensure that operators can continue to use their copper networks, network equipment vendors are adding some new technologies to VDSL2 to increase download speeds to several hundred megabits per second.
To boost DSL to those kinds of speeds, the vendors are using a number of technologies. One way is to send traffic over several copper pairs at the same time, compared to traditional DSL, which only uses one copper pair. This method then uses a technology -- called DSL Phantom Mode by Alcatel-Lucent and Phantom DSL by Nokia Siemens -- that can create a third virtual copper pair that sends data over a combination of two physical pairs.
However, the use of these technologies also creates crosstalk, a form of noise that degrades signal quality and decreases bandwidth. To counteract that, vendors are using a noise-canceling technology called vectoring. It works the same way as noise-canceling headphones, continuously analyzing the noise conditions on the copper cables and then creating a new signal to cancel the noise out, according to Alcatel-Lucent.
Products are now entering field trials and the first commercial services are expected to be launched in 2011. Just like 10G GPON, it is also being pitched as an alternative for mobile backhaul.
LTE
The rollout of LTE (Long Term Evolution) is now under way in Europe, Asia and the U.S., and by the end of 2011 about 50 LTE commercial networks will have launched, according to an October report from the Global Mobile Suppliers Association (GSA) detailing current operator launch plans.

The first round of LTE services, with the exception of MetroPCS and its Samsung Craft phone, connects users with USB modems. That will change in 2011 with the arrival of LTE capable smartphones and tablets. Verizon Wireless expects such handsets will be available by mid-2011, according to a statement.

The bandwidth and coverage operators can offer depends on their spectrum holdings.

LTE isn't just about offering higher speeds to metropolitan areas. In Germany, the government has mandated that the mobile operators first use the technology to offer broadband to rural areas.

Besides higher speeds, LTE also offers lower latency, which helps delay-sensitive real-time applications, including VoIP (Voice over Internet Protocol), video streaming, video conferencing and gaming, perform better.

The rollout of LTE isn't going to happen overnight, even for those operators that are now launching services. By 2013, Verizon plans to cover its entire 3G network with LTE. Telenor in Sweden also plans to have its network upgrade complete by 2013, according to a statement.
HSPA+

LTE may be getting most of the attention, but 2010 has been a banner year for HSPA+ (High-Speed Packet Access). Migration to HSPA+ has been a major trend this year and more than one in five HSPA operators have commercially launched HSPA+ networks, according to the GSA.

However, today's download speeds of up to 21Mbps are far from the end of the line for HSPA+. Nine operators -- including Bell Mobility in Canada and Telstra in Australia -- have already launched services at 42Mbps. The average real-world download speed is 7Mbps to 14Mbps, according to Bell.

To get to that speed, operators use a technology called DC-HSPA+ (Dual-Channel High-Speed Packet Access), which sends data using two channels at the same time.

More than 30 DC-HSPA+ (42Mbps) network deployments are ongoing or committed to, including T-Mobile in the U.S. It will launch services next year, but isn't ready to give any additional details on timing, according to a spokeswoman.

Also, five operators have already committed to 84Mbps, the next evolution step for their HSPA+ networks, with the first such services also expected to arrive next year.
Ericsson's IP Broadband Network Management

Ericsson's IP Broadband Network Management portfolio is a cutting-edge network management system solution for IP and broadband networks that contributes to improved service delivery and performance while lowering costs. It includes Ericsson IP Transport NMS, ServiceOn family and NetOp EMS.
Managing the Broadband Revolution
IP Broadband Network Management portfolio manages the complete range of technologies used for the deployment of Ericsson broadband network systems. Managed components include copper, fiber and radio-based transmission systems for both narrowband and broadband data. Managed transport technologies include IP, MPLS-TP, MPLS, Ethernet, PDH, SDH, ODU, ATM, DWDM, CWDM and xDSL.

Multi-layer management
Service, network and element management layers can be seamlessly integrated into a single solution, based on a selection of integrated products from the IP Broadband Network Management portfolio. Key features include administration of customers and services, facilities management, routing of network trails, network map presentation with fault and performance monitoring, and equipment and inventory management.

Distributed, scalable and resilient architecture
IP Broadband Network Management can be tailored to the operator's requirements according to functional, organizational, security or geographical criteria. The solution is scalable and can easily grow with the managed network.

Open interfaces for integration
IP Broadband Network Management includes an open integration framework enabling it to serve as a higher-order management system for subordinate managers or network elements. The integration framework supports a number of network management protocols including SNMP, CLI, XML and CORBA. A range of strategic third-party network element integrations are already supported as ready-to-manage solutions.
IP Broadband Network Management supports the international TeleManagement Forum (TMF) standard to facilitate platform-independent access to system resources from a variety of distributed client applications. This architecture is used to realize a range of northbound and external interfaces including a web-based customer care management application.

Field-proven expertise
Ericsson's IP Broadband Network Management portfolio has been developed using Ericsson's field-proven worldwide expertise in telecom networks, delivering cost savings to operators and new capabilities to the IP Broadband network portfolio. The solution fully exploits the characteristics of each network technology, delivers advanced functionality such as traffic restoration and alarm correlation, and supports operators' ways of working and procedures.

User-friendly operation
With IP Broadband Network Management the emphasis is on user-friendliness and consistency of presentation, to increase operator efficiency and effectiveness and to minimize training overhead. Network management is performed through a graphical user interface specially designed to leverage the commonality between different technologies and tasks. The user can move between managed technologies and management layers with ease.

As the IP Broadband Network Management portfolio is inherently modular it is possible to deliver the following levels of network management for all the broadband networks elements:
• Element management solutions, providing a centralized, feature-rich maintenance support platform
• Network management solutions, allowing end-to-end provisioning, fault and performance analysis across the entire domain of broadband networks
• Cross-domain management solutions, supporting multi-technology and multi-vendor networks
• Integration capabilities, providing seamless and easy integration of IP Broadband Network Management with customers' existing systems
• Service management solutions, delivering integrated service management across multiple domains
IP Broadband Network Management for broadband networks is also open to adaptation and customization to serve any special needs of operators worldwide.
• Over 650 customers worldwide
• Covering all broadband network technologies, including optical and microwave transport and broadband access
• Carrier class, Tier-1 approved with High Availability options
• Scalable to meet Tier 1, Tier 2, new operators and private network requirements
• Unique management features
• Integrated with Ericsson OSS's
Public Safety Broadband Network
Objective
To develop communications and network models for public safety broadband networks, and to analyze and optimize the performance of these networks.
Background
The FCC and the Administration have called for the deployment of a nationwide, interoperable public safety mobile broadband network. Public safety has been allocated broadband spectrum in the 700 MHz band for such a network. The National Public-Safety Telecommunications Council and the FCC have endorsed the 3GPP Long Term Evolution (LTE) standard as the technology of choice for this network.
Network Analysis and Optimization
The Emerging & Mobile Network Technologies Group (EMNTG) performs RF network analysis and optimization of public safety broadband networks using commercially available and in-house customized network modeling and simulation tools. Current efforts are focused on wide area networks based on LTE technology.
Network analyses can be used to predict wide area performance metrics over a defined geographic area, such as the uplink and downlink:
• Signal-to-interference-and-noise ratio,
• Coverage probability, and
• Achievable data rate.

The tools can also be used to optimize network configurations (e.g., sector antenna directions, transmission powers) for various optimization criteria (e.g., maximum coverage area).
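
As a rough, simplified illustration of the first metric listed above (this is not the group's actual modeling tool), the sketch below computes a downlink SINR in dB from an assumed serving-sector received power, two interferer powers, and a noise floor, all in dBm. Every number here is invented for the example.

#include <stdio.h>
#include <math.h>

/* Convert a power in dBm to linear milliwatts. */
static double dbm_to_mw(double dbm) { return pow(10.0, dbm / 10.0); }

int main(void) {
    double serving_dbm  = -85.0;               /* received power from the serving sector */
    double interf_dbm[] = { -100.0, -105.0 };  /* received power from two interfering sectors */
    double noise_dbm    = -104.0;              /* thermal noise over the channel bandwidth */

    double denom_mw = dbm_to_mw(noise_dbm);
    for (int i = 0; i < 2; i++)
        denom_mw += dbm_to_mw(interf_dbm[i]);  /* sum interference and noise in linear units */

    double sinr_db = 10.0 * log10(dbm_to_mw(serving_dbm) / denom_mw);
    printf("SINR = %.1f dB\n", sinr_db);       /* about 12.7 dB for these example numbers */
    return 0;
}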

Protocol Modeling and Simulation
EMNTG also develops and customizes protocol-level models of wireless technologies such as LTE. These models can be used to perform more detailed simulations of an incident area communication scenario. They include models of the application traffic (e.g., push-to-talk voice, video streams, file transfers), lower layer communication protocols, and the RF channel environment. Performance metrics generated by these models include:
• Achieved throughput
• Communication delay
• Packet loss rate
• Network utilization

Protocol models complement the wide area network analysis and optimization tools described above to provide a more complete picture of the expected public safety communication performance. Using the cell sizes, antenna configurations, and transmission powers obtained from a network optimization, the protocol-level analysis generates realistic sector loads which can be fed back to the network analysis and optimization tools for further refinement of the optimized configuration.
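
For illustration only, the sketch below shows how two of the listed metrics, achieved throughput and packet loss rate, could be computed from the packet counters a protocol-level simulation might collect. The counts, packet size, and duration are invented, and this is not EMNTG's code.

#include <stdio.h>

int main(void) {
    long packets_sent     = 120000;                     /* packets offered by the traffic model */
    long packets_received = 118200;                     /* packets delivered to the receivers */
    long bytes_received   = packets_received * 1200L;   /* assume 1200-byte packets */
    double sim_seconds    = 60.0;                       /* simulated duration */

    double loss_rate  = 1.0 - (double)packets_received / packets_sent;
    double throughput = bytes_received * 8.0 / sim_seconds / 1e6;   /* Mbit/s */

    printf("packet loss rate:     %.2f%%\n", loss_rate * 100.0);
    printf("achieved throughput:  %.2f Mbit/s\n", throughput);
    return 0;
}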

Channel Measurements
The fidelity of the predictions generated by the network and protocol models is dependent in large part on the accuracy of the underlying RF channel propagation model. In collaboration with its partners in NIST/PML, EMNTG collects and analyzes channel measurements that are used to develop and tune RF propagation models. Of particular relevance to the Public Safety Broadband Network, these efforts include measurements in the 700 MHz public safety band.
Multiservice broadband network technology
Multiservice broadband networks are on the verge of becoming a reality. The key drivers are rapid advances in technologies; privatization and competition; the convergence of the telecommunication, data communication, entertainment, and publishing industries; and changes in consumers' life-styles.
Wide-area networks today consist of separate networks for voice, private lines, and data services, supported by a common facility infrastructure. Similar separation between voice and data networks exists on enterprise premises (Fig. 1). Residential users typically use copper loop to access voice networks as well as data networks (using voice-band modems).



Fig. 1 Current network architecture.
Public voice and private line networks have been designed for very low latency and with an unrelenting attention to reliability and quality of service. Data networks (especially the public Internet) introduce longer and less predictable delays and are not suitable for highly interactive communication. For the most part, the Internet has not yet been shown to be reliable enough for mission-critical functions.
The time is ripe for these networks to change in fundamental ways. There is a demand for services involving multiple media, increasing intelligence, diverse speeds, and varied quality-of-service requirements. Increasing dependence on network services requires all forms of networking to be as reliable as today's voice network. The revolution taking place in electronics and photonics will provide ample opportunities to satisfy these requirements.
Technological advances
The storage capacity of a single dynamic random-access memory (DRAM) chip increased from 64 kilobits in 1970 to 256 megabits in 1998, and 4-gigabit chips are possible in research laboratories (Fig. 2a). Development of multistate transistors and new lithographic techniques (enabling the fabrication of atomic-scale transistors) promise the continuation of this trend. The computing power in a single microprocessor chip increased from 1 million instructions per second (MIPS) in 1981 to about 400 MIPS in 1998, and no limit is in sight (Fig. 2b). These advances are being translated into explosive growth in switching and routing capacities, information processing capacities, database sizes, and data retrieval speeds. Atomic-scale transistors also promise “system on a chip,” resulting in inexpensive wearable computers, smart appliances, and wireless devices. They also promise less power consumption and longer battery life.

Fig. 2 Progress in semiconductor technologies. (a) Density of dynamic random-access memories (DRAMs). Lengths (in micrometers) are minimum internal spacings of chip components. Advances in manufacturing techniques allow narrower spacing, making possible higher density and hence larger storage capacity. (b) Microprocessor speeds.
The advances in photonics are even more remarkable. The capacity of a single optical fiber increased from 45 megabits per second in 1981 to 1.7 gigabits per second in 1990 (Fig. 3). A major change occurred with the advent of dense wavelength-division multiplexing (DWDM) and optical amplifiers. Dense wavelength-division multiplexing allows the transport of many colors (wavelengths) of light in one fiber, while optical amplifiers amplify all wavelengths in a fiber simultaneously. These two innovations have made it possible to carry 400 gigabits per second (40 wavelengths each carrying 10 gigabits per second) on a single fiber and 1600 Gbps systems are on the horizon. Experimental work in research laboratories is pushing this capacity to over 3 terabits per second. Recent innovations have made it possible to put 432 fibers in a single cable. One such cable will be able to transport the daily volume of current worldwide traffic in 60 seconds. Many new and existing operators of wide-area networks have already started capitalizing on this growth in transport capacity. Transport capacity in wide-area networks around the world is expected to increase by a factor of up to 1000 by 2005.

Fig. 3 Capacity growth in long-haul optical fibers. Evolution is plotted in both the total capacity and the number of wavelengths employed.
Advances in electronics and photonics will also permit tremendous increases in access speeds, releasing the residential and small business users from the current speed restriction of the “last mile” on the copper loop that links the user to the network. New digital subscriber loop (DSL) technologies use advanced coding and modulation techniques to carry from several hundred kilobits per second to 50 megabits per second on the copper loop. Hybrid fiber-coaxial (HFC) technologies allow the use of cable television channels to provide several megabits per second upstream and up to 40 megabits per second downstream for a group of about 500 homes. Similar access speeds may also be provided by hybrid fiber-wireless technologies. Fiber-to-the-curb (FTTC) and fiber-to-the-home (FTTH) technologies provide up to gigabits per second to single homes by bringing fiber closer to the end users. Passive optical networks (PONs) will allow a single fiber from the central hub to serve several hundred homes by providing one or more wavelengths to each home using wavelength splitting techniques.
Advances in network infrastructure
Innovations in software technologies, network architectures, and protocols will harness this increase in networking capacities to realize true multiservice networks.
Narrowband voice [analog, ISDN (Integrated Services Digital Network), and cellular] and Internet Protocol (IP)–based data end systems are likely to continue operating well into the future, as are the Ethernet-based local-area networks. Most of the present growth is in IP-based end systems and cellular telephones, although many parts of the world are still experiencing enormous growth in analog and ISDN telephones. New services are being developed for the IP-based end systems (such as multimedia collaboration, multimedia call centers, distance learning, networked home appliances, directory assisted networking, on-line language translation, and telemedicine). These services will coexist with the traditional voice and data services.

One challenge is to find the right network architecture for efficiently transporting the traffic generated by current and new services. An approach involving selective layered bandwidth management (SLBM) provides efficiency, robustness, and evolvability. SLBM uses all or some of the IP, Asynchronous Transfer Mode (ATM), SONET/SDH (synchronous optical network/synchronous digital hierarchy), and optical networking layers, depending on the traffic mix and volume (Fig. 4).


Fig. 4 Protocol layering and evolution. Heavy arrows represent new and future layering. PPP is Point-to-Point protocol; HDLC, High-speed Data-Link Control (used here only for the purpose of delineating the packets); SDL, Simplified Data-Link protocol; HSSF, High-Speed Synchronous Framing protocol; Cell PL, (ATM) Cell-based Physical Layer.





For the near future, ATM provides the best technology for a true multiservice backbone due to its extensive quality-of-service and traffic management features such as admission controls, traffic policing, service scheduling, and buffer management. An ATM backbone network can support the traditional voice traffic, new packet voice traffic, various types of video traffic, Ethernet traffic, and IP traffic. The ATM Adaptation Layer protocols AAL1, AAL2, and AAL5 facilitate this integration. ATM also enables creation of link-layer (layer 2) virtual private networks (VPNs) over a shared public network infrastructure while providing secure communication and performance guarantees.
Traditional voice, IP, and ATM traffic will be transported over the circuits provided by the SONET/SDH network layer over the optical (wavelength) layer. SONET/SDH networks allow partitioning of the wavelength capacity into lower-granularity circuits which can be used by this traffic for efficient networking. The SONET/SDH layer also provides extensive performance monitoring. Finally, SONET/SDH networks, formed using ring topology, allow simple and fast (several tens of milliseconds) rerouting around a failure in a link or a node. This fast restoration of service makes such failures transparent to the service layers. The optical layer will consist of point-to-point fiber links over which many wavelengths will be multiplexed to allow very efficient use of the fiber capacity. Many public carriers and enterprise networks will use this multilayered networking. However, all these situations will change over time.
Rapid growth of IP end systems and high access speeds to the wide-area networks will create high point-to-point IP demands in the core backbone. At this level of traffic, the finer bandwidth partitioning provided by the ATM layer may not compensate for the additional protocol and equipment overhead. Thus, an increasingly higher portion of IP traffic will be carried directly over SONET/SDH circuits. At even higher traffic levels, even the partitioning provided by the SONET/SDH layer will be unnecessary. Meanwhile, optical cross connects and add-drop multiplexers will allow true optical-layer networking using ring or mesh topology. New techniques and algorithms being developed for very fast restoration at the optical layer will further reduce the need for the SONET/SDH layer. IP-over-wavelength networking is then expected to develop.
Of course, this simplification in the network infrastructure requires IP- and Ethernet-based networks to multiplex voice, data, video, and multimedia services directly over the SONET/SDH or dense wavelength-division multiplexing layer. Ethernet and IP protocols are being enhanced to allow such multiplexing.
New high-capacity local-area network switches (layer 2/3 switches) provide fast (10–1000 megabits per second) interfaces and large switching capacities. They also eliminate the need for contention-based access over shared-media local-area networks. Protocol standards (802.1p and 802.1q) are being defined to provide multiple classes of service over such switches by providing different delay and loss controls. Similarly, router technology and Internet protocols are beginning to eliminate the bottlenecks caused by software-based forwarding, rigid routing, and undifferentiated packet processing. New IP switches (layer 3 switches) use hardware-based packet forwarding in each input port, permitting very high capacities (60–1000 gigabits per second, or 50–1000 million packets per second). Protocols are being defined to use the Differentiated Services (DS) field in the header of each IP packet to signal the quality-of-service requirement of the application being supported by that packet. Intelligent buffering and packet scheduling algorithms in IP switches can use this field to provide differential quality of service as measured by delay, jitter, and losses. At the interface between layer 2 and layer 3 switches, conversion between 802.1p- and DS-based signaling can provide seamless quality-of-service management within enterprise and carrier networks. Resource reservation protocols are being defined to allow applications to signal resource requirements to the network. Traffic-policing algorithms will guarantee that the traffic entering the network is consistent with the resources reserved by the reservation protocols. Resource reservation protocols and traffic policing will further help quality-of-service management. Two other factors will add to the quality-of-service capability of IP networks. In particular, hierarchical classification of IP traffic using additional fields in the header will allow bandwidth guarantees and delay control for individual or aggregated flows based on the users as well as applications. Also, ongoing work on Multi Protocol Label Switching (MPLS) will allow flexible routing and traffic engineering in IP networks.
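
As a small, concrete example of the DS field in practice, the sketch below marks outgoing UDP packets from a Linux/POSIX application by setting the IP TOS byte so that its upper six bits carry a DSCP value (here 46, Expedited Forwarding). It only shows the marking mechanism; whether routers act on the marking depends entirely on network policy, and the snippet is not taken from any standard cited here.

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/ip.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int tos = 46 << 2;   /* DSCP 46 (EF) occupies the upper six bits of the TOS byte */
    if (setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos)) < 0)
        perror("setsockopt(IP_TOS)");

    /* ... any sendto() calls on fd now carry the DS marking ... */
    return 0;
}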
With these capabilities, public carriers can offer IP-layer (layer 3) virtual private networks to many enterprise customers over a shared infrastructure. Carrier-based policy servers, service and resource provisioning servers, and directories will interact with enterprise policy servers to map the enterprise requirements into carrier actions consistent with the overall virtual private network contract.
While backbone networks are likely to become more uniform, many different access technologies (such as copper loop, wireless, fiber, coaxial cable, fiber-cable hybrid, and satellite) will continue to play major roles in future networks. Standards are being defined for each of the access technologies to support diverse quality-of-service requirements over one interface. Thus, a true end-to-end solution is expected for multiservice broadband networking.

New technology uses human body for broadband networking
Your body could soon be the backbone of a broadband personal data network linking your mobile phone or MP3 player to a cordless headset, your digital camera to a PC or printer, and all the gadgets you carry around to each other.
These personal area networks are already possible using radio-based technologies, such as Wi-Fi or Bluetooth, or just plain old cables to connect devices. But NTT, the Japanese communications company, has developed a technology called RedTacton, which it claims can send data over the surface of the skin at speeds of up to 2Mbps -- equivalent to a fast broadband data connection.
Using RedTacton-enabled devices, music from an MP3 player in your pocket would pass through your clothing and shoot over your body to headphones in your ears. Instead of fiddling around with a cable to connect your digital camera to your computer, you could transfer pictures just by touching the PC while the camera is around your neck. And since data can pass from one body to another, you could also exchange electronic business cards by shaking hands, trade music files by dancing cheek to cheek, or swap phone numbers just by kissing.
NTT is not the first company to use the human body as a conduit for data: IBM pioneered the field in 1996 with a system that could transfer small amounts of data at very low speeds, and last June, Microsoft was granted a patent for "a method and apparatus for transmitting power and data using the human body."
But RedTacton is arguably the first practical system because, unlike IBM's or Microsoft's, it doesn't need transmitters to be in direct contact with the skin -- they can be built into gadgets, carried in pockets or bags, and will work within about 20cm of your body. RedTacton doesn't introduce an electric current into the body -- instead, it makes use of the minute electric field that occurs naturally on the surface of every human body. A transmitter attached to a device, such as an MP3 player, uses this field to send data by modulating the field minutely in the same way that a radio carrier wave is modulated to carry information.
Receiving data is more complicated because the strength of the electric field involved is so low. RedTacton gets around this using a technique called electric field photonics: A laser is passed through an electro-optic crystal, which deflects light differently according to the strength of the field across it. These deflections are measured and converted back into electrical signals to retrieve the transmitted data.
An obvious question, however, is why anyone would bother networking through their body when proven radio-based personal area networking technologies, such as Bluetooth, already exist. Tom Zimmerman, the inventor of the original IBM system, says body-based networking is more secure than broadcast systems, such as Bluetooth, which have a range of about 10m.
"With Bluetooth, it is difficult to rein in the signal and restrict it to the device you are trying to connect to," says Zimmerman. "You usually want to communicate with one particular thing, but in a busy place there could be hundreds of Bluetooth devices within range."
As human beings are ineffective aerials, it is very hard to pick up stray electronic signals radiating from the body, he says. "This is good for security because even if you encrypt data it is still possible that it could be decoded, but if you can't pick it up it can't be cracked."
Zimmerman also believes that, unlike infrared or Bluetooth phones and PDAs, which enable people to "beam" electronic business cards across a room without ever formally meeting, body-based networking allows for more natural interchanges of information between humans.
"If you are very close or touching someone, you are either in a busy subway train, or you are being intimate with them, or you want to communicate," he says. "I think it is good to be close to someone when you are exchanging information."

RedTacton transceivers can be treated as standard network devices, so software running over Ethernet or other TCP/IP protocol-based networks will run unmodified.
Gordon Bell, a senior researcher at Microsoft's Bay Area Research Center in San Francisco, says that while Bluetooth or other radio technologies may be perfectly suitable to link gadgets for many personal area networking purposes, there are certain applications for which RedTacton technology would be ideal.
"I recently acquired my own in-body device -- a pacemaker -- but it takes a special radio frequency connector to interface to it. As more and more implants go into bodies, the need for a good Internet Protocol connection increases," he says.
In the near future, the most important application for body-based networking may well be for communications within, rather than on the surface of, or outside, the body.
An intriguing possibility is that the technology will be used as a sort of secondary nervous system to link large numbers of tiny implanted components placed beneath the skin to create powerful onboard -- or in-body -- computers.



References
http://www.mobileinfo.com
http://www.infoworld.com
http://www.ericsson.com
http://www.nist.gov
http://accessscience.com
http://searchtelecom.techtarget.com
http://en.wikipedia.org

If you have any problems or questions, please leave a comment.

How Does A Refrigerator Work?  

Posted by technology2day


How Does A Refrigerator Work?
Now here's a cool idea: a metal box that helps your food last longer! Have you ever stopped to think how a refrigerator keeps cool, calm, and collected even in the blistering heat of summer? Food goes bad because bacteria breed inside it. But bacteria grow less quickly at lower temperatures, so the cooler you can keep food, the longer it will last. A refrigerator is a machine that keeps food cool with some very clever science. All the time your refrigerator is humming away, liquids are turning into gases,  water is turning into ice, and your food is staying deliciously fresh. Let's take a closer look at how a refrigerator works!

How to move something you can't even see
Suppose your chore for today is to empty a stable full of rank smelling horse manure. Not the nicest of jobs, so you'll want to do it as quickly as possible. You won't be able to move it all at once, because there's too much of it. To get the job done fast, you need to move as much manure as you can in one go. The best thing to do is use a wheelbarrow. Pile the manure up into the barrow, wheel the barrow outside, and then empty the manure into a pile in the stable yard. With a few of these trips, you can shift the manure from inside the stable to outside.
Moving something you can see is easy. But now let's give you a harder chore. Your new task is to move the heat from the inside of a refrigerator to the outside to keep your food fresh. How can you move something you can't see? You can't use a wheelbarrow this time. Not only that, but you can't open the door to get at the heat inside, or you'll let the heat straight back in again. Your mission is to remove the heat, continually, without opening the door even once. Tricky problem, eh? But it's not impossible—at least not if you understand the science of gases.

How to move heat with a gas
Let's step sideways a moment and look at how gases behave. If you've ever pumped up the tires on a bicycle, you'll know that a bicycle pump soon gets quite warm. The reason is that gases heat up when you compress (squeeze) them. To make the tire support the weight of the bicycle and your body, you have to squeeze air into it at a high pressure. Pumping makes the air (and the pump it passes through) a little bit hotter. Why? As you squeeze the air, you have to work quite hard with the pump. The energy you use in pumping is converted into potential energy in the compressed gas: the gas in the tire is at a higher pressure and higher temperature than the cool air around you. If you squeeze a gas into half the volume, the heat energy its molecules contain fills only half as much space, so the temperature of the gas rises (it gets hotter).
What happens if you release a gas that's stored at high pressure? When you spray an aerosol air freshener, you've probably noticed that the spray is really cold—for exactly the opposite reason that a bicycle pump gets hot. When you release the gas, it is suddenly able to expand and occupy much more volume. The heat energy its molecules contain is now divided over a much bigger volume of space, so the temperature of the gas falls (it gets cooler).
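
To put rough numbers on the bicycle-pump effect, assume the squeeze happens so quickly that no heat escapes (an idealized adiabatic compression of air, with \gamma \approx 1.4; this assumption is mine, not part of the explanation above). Then

T_2 = T_1 \left( \frac{V_1}{V_2} \right)^{\gamma - 1}

so halving the volume of room-temperature air (T_1 \approx 293 K) gives T_2 \approx 293 \times 2^{0.4} \approx 387 K, roughly 114 degrees Celsius. A real pump leaks heat to its surroundings and never gets that hot, but the direction of the effect is exactly what you feel.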


The heating and cooling cycle
By compressing gases, we make them hotter; by letting them expand, we make them cooler. How can we use this handy bit of physics to shift heat from the inside of a refrigerator? Suppose we made a pipe that was partly inside a refrigerator and partly outside it, and sealed so it was a continuous loop. And suppose we filled the pipe with a gas. Inside the refrigerator, we could make the pipe gradually get wider, so the gas would expand and cool as it flowed through it. Outside the refrigerator, we could have something like a bicycle pump to compress the gas and release its heat. If the gas flowed round and round the loop, expanding when it was inside the refrigerator and compressing when it was outside, it would constantly pick up heat from the inside and carry it to the outside like a heat conveyor belt.

And, surprise surprise, this is almost exactly how a refrigerator works. There are some extra details worth noting. Inside the refrigerator, the pipe expands through a nozzle known as an expansion valve. As the gas passes through it, it cools dramatically. This bit of science is sometimes known as the Joule-Thomson (or Joule-Kelvin) effect for the physicists who discovered it, James Prescott Joule (1818–1889) and William Thomson (Lord Kelvin, 1824–1907). You won't be surprised to discover that the compressor outside the refrigerator is not really a bicycle pump! It's actually an electrically powered pump. It's the thing that makes a refrigerator hum every so often. The compressor is attached to a grill-like device called a condenser (a kind of thin radiator behind the refrigerator) that expels the unwanted heat. Finally, the gas that circulates round the pipe is actually a specially designed chemical that alternates between being a cool liquid and a hot gas. This chemical is known as the coolant or refrigerant.

Here's what's happening inside your refrigerator as we speak, both on the inside back wall of the chiller cabinet and around the back of the fridge on the outside:
1. The coolant is a liquid as it enters the expansion valve. As it passes through, the sudden drop in pressure makes it expand, cool, and turn into a gas (just like a liquid aerosol turns into a cool gas when you spray it out of a can).
2. As the coolant flows around the chiller cabinet (usually around a pipe buried in the back wall), it absorbs and removes heat from the food inside.
3. The compressor squeezes the coolant, raising its temperature and pressure. It's now a hot, high-pressure gas.
4. The coolant flows through thin pipes on the back of the fridge, giving out its heat and cooling back into a liquid as it does so.
5. The coolant flows back into the expansion valve and the cycle repeats itself. So heat is constantly picked up from inside the refrigerator and put down again outside it.




Old Refrigerators
If you look at the back or bottom of an older refrigerator, you'll see a long thin tube that loops back and forth. This tube is connected to a pump, which is powered by an electric motor.
Inside the tube is Freon, a type of gas. Freon is a brand name; chemically, the gas is a chlorofluorocarbon, or CFC. This gas was found to hurt the environment if it leaks from refrigerators. So now, other chemicals are used in a slightly different process (see next section below).
CFC starts out as a liquid. The pump pushes the CFC through a lot of coils in the freezer area. There the chemical turns to a vapor. When it does, it soaks up some of the heat that may be in the freezer compartment. As it does this, the coils get colder and the freezer begins to get colder.
In the regular part of your refrigerator, there are fewer coils and a larger space. So, less heat is soaked up by the coils and the CFC vapor.
The pump then sucks the CFC vapor and forces it through thinner pipes on the outside of the refrigerator. As it is compressed, the CFC turns back into a liquid, and the heat it gives off is absorbed by the air around it. That's why it might be a little warmer behind or under your refrigerator.
Once the CFC passes through the outside coils, the liquid is ready to go back through the freezer and refrigerator over and over.


Today's Refrigerators
Modern refrigerators don't use CFC because CFCs are harmful to the atmosphere if released. Instead they use another type of gas called HFC-134a, also called tetrafluoroethane. HFC turns into a liquid when it is cooled to -15.9 degrees Fahrenheit (-26.6 degrees Celsius).
A motor and compressor squeeze the HFC. As it is compressed, the gas heats up. When you pass the compressed gas through the coils on the back or bottom of a modern refrigerator, the warmer gas can lose its heat to the air in the room.
Remember the law of thermodynamics.
As it cools, the HFC can change into a liquid because it is under a high pressure.
The liquid flows through what's called an expansion valve, a tiny hole that the liquid has to squeeze through. Between the valve and the compressor, there is a low-pressure area because the compressor is pulling the HFC gas out of that side.
When the liquid HFC hits a low pressure area it boils and changes into a gas. This is called vaporizing.
The coils then go through the freezer and the regular part of the refrigerator, where the cold refrigerant in the coils pulls heat out of the compartments. This makes the inside of the freezer and the entire refrigerator cold.
The compressor sucks up the cold gas, and the gas goes back through the same process over and over.


How Does the Temperature Stay the Same Inside?
A device called a thermocouple (it's basically a thermometer) can sense when the temperature in the refrigerator is as cold as you want it to be. When it reaches that temperature, the device shuts off the electricity to the compressor.
But the refrigerator is not completely sealed. There are places, like around the doors and where the pipes go through, that can leak a little bit.
So when the cold from inside the refrigerator starts to leak out and the heat leaks in, the thermocouple turns the compressor back on to cool the refrigerator off again.
That's why you'll hear your refrigerator compressor motor coming on, running for a little while and then turning itself off.
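
As a toy sketch of that on/off cycling (not how any real refrigerator is programmed), the little C program below fakes the cabinet temperature and switches a pretend compressor on and off around a set point. The set point, switch-on temperature, and cooling and warming rates are all invented for illustration.

#include <stdio.h>
#include <stdbool.h>

#define SET_POINT_C  4.0   /* switch the compressor off at or below this temperature */
#define SWITCH_ON_C  5.5   /* switch it back on once the cabinet warms up to this */

int main(void) {
    double temp = 7.0;     /* starting cabinet temperature in degrees C */
    bool running = false;

    for (int minute = 0; minute < 60; minute++) {
        if (!running && temp >= SWITCH_ON_C) running = true;   /* too warm: compressor on */
        if (running && temp <= SET_POINT_C)  running = false;  /* cold enough: compressor off */

        temp += running ? -0.3 : 0.1;   /* fake physics: cool while running, warm while off */
        printf("min %2d  %4.1f C  compressor %s\n", minute, temp, running ? "ON" : "off");
    }
    return 0;
}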
Today's refrigerators, however, are very energy efficient. Ones sold today use about one-tenth the amount of electricity of ones that were built 20 years ago. So, if you have an old, old refrigerator, it's better to buy a new one because you'll save money (and energy) over a long period of time.


For more information go to:
• Argonne National Laboratory - Ask A Scientist (http://newton.dep.anl.gov/newton/askasci/1993/eng/ENG30.HTM)

• Mr. Hand's 8th Grade Science Site (www.mansfieldct.org/schools/mms/staff/hand/heatrefrig.htm)

• How Stuff Works - Refrigerator (www.howstuffworks.com/refrigerator.htm)

• Science Treasure Trove - refrigerator page (www.education.eth.net/acads/treasure_trove/refrigerator.htm)

Reference

1.www.explainthatstuff.com
2.www.energyquest.ca.gov

INTRODUCTION TO MICROCONTROLLERS  

Posted by technology2day

INTRODUCTION TO MICROCONTROLLERS

What are micro controllers? They are what their name suggests. Today they can be found in almost any complex electronic device - from portable music devices to washing machines to your car. They are programmable, cheap, small, can handle abuse, require almost zero power, and there are so many varieties to suit every need. This is what makes them so useful for robotics - they are like tiny affordable computers that you can put right onto your robot.
Augmented Microcontrollers and Development Boards
In a pure sense, a micro controller is just an IC (integrated circuit, or a black chip thing with pins coming out of it). However it is very common to add additional external components, such as a voltage regulator, capacitors, LEDs, motor driver, timing crystals, rs232, etc to the basic IC. Formally, this is called an augmented microcontroller. But in reality, most people just say 'microcontroller' even if it has augmentation. Other abbreviations would be ucontroller and MicroController Unit (MCU). Usually when I say 'microcontroller' what I really mean to say is 'augmented microcontroller.'
As a beginner it is probably best to buy an augmented micro controller. Why? Well because they have tons of goodies built onto them that are all assembled and debugged for you. They also often come with tech support, sample code, and a community of people to help you with them. My micro controller parts list shows the more popular types that you can buy. They tend to cost from $30 to $150 depending on the features. This will give you a good introduction to micro controller programming without having to be concerned with all the technical stuff.
In the long term however you should build your own augmented microcontroller so that you may understand them better. The advantage to making your own is that it will probably cost you from $10-$30.

Between getting a full augmented board and doing it yourself is something called a development board. These boards come pre-augmented with just the bare basics to get you started. They are designed for prototyping and testing of new ideas very quickly. They typically cost between $15 and $40.


What comes with the IC?
There is a huge variety of microcontrollers out on the market, but I will go over a few common features that you will find useful for your robotics project.

For robots, more important than any other feature on a microcontroller are the I/O ports. Input ports are used for taking in sensor data, while output is used for sending commands to external hardware such as servos. There are two types of I/O ports, analog and digital.

Analog Input Ports
Analog Ports are necessary to connect sensors to your robot. Also known as an analog to digital converter (ADC), they receive analog signals and convert them to a digital number within a certain numerical range.

So what is analog? Analog is a continuous voltage range and is typically found with sensors. However computers can only operate in the digital realm with 0's and 1's. So how does a microcontroller convert an analog signal to a digital signal?

First, the analog signal is measured after a predefined period of time passes. At each time period, the voltage is recorded as a number. This number then defines a signal of 0's and 1's.




The advantage of digital over analog is that digital is much better at eliminating background noise. Cell phones are all digital today, and although the digital signal is less representative than an analog signal, it is much less likely to degrade since computers can restore damaged digital signals. This allows for a clearer output signal to talk to your mom or whoever. MP3's are all digital too, usually encoded at 128 kbps. Higher bit rates obviously mean higher quality because they better represent the analog signal. But higher bit rates also require more memory and processing power.

Most microcontrollers today are 8 bit, meaning they have a range of 256 (2^8=256). There are a few that are 10 bit, 12 bit, and even 32 bit, but as you increase precision you also need a much faster processor.

What does this bit stuff mean for ADC? For example, suppose a sensor outputs 0V to an 8 bit ADC. This would give you a digital output of 0. 5V would be 255. Now suppose a sensor gave an output of 2.9V, what would the ADC output be?

Doing the math:

2.9V/5V = X/255
X = 2.9*255/5 = 148
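
If you wanted to do this conversion in code, a tiny C helper might look like the sketch below. It assumes an 8-bit ADC with a 0V to 5V reference, just like the example above; the function name is made up.

#include <stdio.h>

/* Convert a voltage into the 0-255 count an ideal 8-bit ADC would report. */
int adc_counts(double volts, double vref) {
    double x = volts / vref * 255.0;   /* scale into the 0-255 range */
    if (x < 0.0)   x = 0.0;            /* clamp out-of-range inputs */
    if (x > 255.0) x = 255.0;
    return (int)(x + 0.5);             /* round to the nearest count */
}

int main(void) {
    printf("%d\n", adc_counts(2.9, 5.0));   /* prints 148, matching the math above */
    return 0;
}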

So how do you use an analog port? First make sure your sensor output does not exceed your digital logic voltage (usually 0V -> 5V). Then plug that output directly to the analog port.

This bit range could also be seen as a resolution. Higher resolutions mean higher accuracy, but occasionally can mean slower processing and more susceptibility to noise. For example, suppose you had a 3 bit controller which has a range of 2^3=8. Then you have a distance sensor that output a number 0->7 (a total of 8) that represents the distance between your robot and the wall. If your sensor can see only 8 feet, then you get a resolution of 1 bit per foot (8 resolution / 8 feet = 1). But then suppose you have an 8 bit controller, you would get 256/8=32 ~ 1 bit per centimeter - way more accurate and useful! With the 3 bit controller, you could not tell the difference between 1 inch and 11 inches.

Digital I/O Ports
Digital ports are like analog ports, but with only 1 bit (2^1=2), hence a resolution of 2 - on and off. For that reason, digital ports are rarely used for sensors, except for maybe on/off switches . . . What they are mostly used for is signal output. You can use them to control motors or LED's or just about anything. Send a high 5V signal to turn something on, or a low 0V to turn something off. Or if you want to have an LED at only half brightness, or a motor at half speed, send a square wave. Square waves are like turning something on and off so fast that it's almost like sending out an analog voltage of your choice. Neat, huh?

This is an example of a square wave used for PWM:
These square waves are called PWM, short for pulse-width modulation. They are most often used for controlling servos or DC motor H-bridges.
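
If your controller lacks hardware PWM, you can fake it in software. Here is a minimal sketch of that idea in C; set_pin() and delay_us() are hypothetical placeholders for whatever pin and delay routines your particular compiler provides.

void set_pin(int high);       /* hypothetical: drive the output pin high (1) or low (0) */
void delay_us(unsigned us);   /* hypothetical: busy-wait for the given number of microseconds */

/* One PWM period: high for 'duty_percent' of the time, low for the rest. */
void pwm_period(unsigned duty_percent, unsigned period_us)
{
    unsigned high_us = period_us * duty_percent / 100;
    set_pin(1);
    delay_us(high_us);              /* on for part of the period... */
    set_pin(0);
    delay_us(period_us - high_us);  /* ...off for the rest, so the average looks analog */
}

Calling pwm_period(50, 1000) over and over gives an LED roughly half brightness, because on average the pin spends half its time high.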

Also a quick side note, analog ports can be used as digital ports.

Serial Communication, RS232, UART
A serial connection on your microcontroller is very useful for communication. You can use it to program your controller from a computer, to output data from your controller to your computer (great for debugging), or even to operate other electronics such as digital video cameras. Usually the microcontroller requires an external IC, such as an RS232 level converter, to handle the voltage levels. To learn more, read my microcontroller UART tutorial.
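
As a rough idea of what UART code looks like, here is a hedged sketch for an ATmega328-style AVR using avr-libc register names (UBRR0L, UDR0, and so on); other microcontroller families use different registers, so treat this as a pattern rather than something to copy blindly.

#include <avr/io.h>

void uart_init(void)
{
    UBRR0H = 0;
    UBRR0L = 103;                            /* 9600 baud, assuming a 16 MHz clock */
    UCSR0B = (1 << TXEN0);                   /* enable the transmitter */
    UCSR0C = (1 << UCSZ01) | (1 << UCSZ00);  /* 8 data bits, no parity, 1 stop bit */
}

void uart_putc(char c)
{
    while (!(UCSR0A & (1 << UDRE0)))
        ;                                    /* wait until the transmit register is free */
    UDR0 = c;                                /* send the byte */
}

int main(void)
{
    uart_init();
    const char *msg = "sensor says hi\r\n";
    for (const char *p = msg; *p; p++)
        uart_putc(*p);
    for (;;)
        ;                                    /* microcontroller programs never return */
}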

Timers
A timer is the method by which the microcontroller measures the passing of time - such as for a clock, sonar, a pause/wait command, timer interrupts, etc. To learn more, read my microcontroller timer tutorial.
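
For example, sonar ranging is really just timing: start counting when the ping leaves and stop when the echo returns. The micros() function below is a hypothetical stand-in for whatever free-running timer your controller provides.

unsigned long micros(void);   /* hypothetical: microseconds elapsed since power-up */

/* Sound travels roughly 29 microseconds per centimeter, out and back. */
unsigned long sonar_cm(unsigned long ping_sent_us, unsigned long echo_heard_us)
{
    unsigned long round_trip = echo_heard_us - ping_sent_us;
    return round_trip / 58;   /* divide by 2 * 29 us/cm to get the one-way distance */
}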

I^2C
I^2C (pronounced 'I-squared-C') is also useful for communication, but I have never used it. Just make sure your controller has some method of communicating data to you for easy and effective debugging/testing of your robot programs. I^2C is actually somewhat complicated, but usually the manufacturer has simplified it so all you have to do is plug and play and write a few print statements. To learn more, read the I^2C tutorial.

Motor Driver
To run a DC motor you need either an H-bridge or a motor driver IC. The IC is great for small robots that do not exceed 1 or 2 amps per motor and whose rated motor voltage is not higher than about 12V. A homemade H-bridge would be needed if you wanted to exceed those specs. There are a few H-bridge controllers commercially available, but usually they are way too expensive and are designed for battlebot-type robots. The IC is small, very cheap, and can usually handle two motors. I highly recommend opting for the IC. Also, do not forget to put a heatsink on the motor driver. Motor drivers give off pretty fireworks when they explode from overheating =)
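
As a sketch of how your code would talk to a motor driver, here is a direction-control snippet. The pin-setting functions are hypothetical, and the two-inputs-per-motor convention matches many common driver ICs, though yours may differ.

void set_in1(int high);   /* hypothetical: driver input 1 for one motor */
void set_in2(int high);   /* hypothetical: driver input 2 for the same motor */

void motor_forward(void) { set_in1(1); set_in2(0); }   /* current flows one way */
void motor_reverse(void) { set_in1(0); set_in2(1); }   /* current flows the other way */
void motor_stop(void)    { set_in1(0); set_in2(0); }   /* brake or coast, depending on the chip */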

Another interesting note: you can stack ICs in parallel to double the allowable current and heat dissipation. Theoretically you can stack as many as you want, as long as the current is high enough to still operate the logic of the ICs. This works for voltage regulators too.

Output Indicators
I'm referring to anything that can be used for debugging by communicating information to you: LEDs, buzzers, LCD screens, anything that gives output. The better the indicator, the easier the debugging. The best indicator is to have your robot tethered so it prints or data-logs sensor and action data to your computer, but it isn't always possible to keep your robot tethered.

Programming Languages
The lowest form of programming language is machine language, and this is what the microcontroller ultimately executes.

An example of machine language:

3A 10 51
E6 DF
32 38 00

Obviously neither of us could ever memorize what all those seemingly random numbers and letters do, so we would program in a higher-level language that makes much more sense:

if (language == easy)
    print("yay!");

These higher-level languages are then compiled automatically into machine language, which you can upload to your robot. Probably the easiest language to learn is BASIC, with a name true to itself. The BASIC Stamp microcontroller uses that language. But BASIC has its limitations, so if you have any programming experience at all, I recommend you program in C. This language was the precursor to C++, so if you can already program in C++, it should be really simple for you to learn. What complicates this is that there is no standard for programming microcontrollers. Each has its own features, its own language, its own compiler, and its own method of uploading to the controller.
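
For a taste of what C on a microcontroller looks like, here is the classic "blink an LED" program. The led_on(), led_off(), and delay_ms() names are made up for this sketch; every compiler and chip spells them differently.

void led_on(void);            /* hypothetical: drive the LED pin high */
void led_off(void);           /* hypothetical: drive the LED pin low */
void delay_ms(unsigned ms);   /* hypothetical: wait the given number of milliseconds */

int main(void)
{
    for (;;) {                /* robot code loops forever */
        led_on();
        delay_ms(500);
        led_off();
        delay_ms(500);
    }
}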

With no standard, there are too many options out there for me to go into much detail. The support documents that come with the controllers should answer your specific questions. Also, if you decide to use a PIC, understand that the compiler program (at least the good ones) can cost hundreds of dollars. Most microcontrollers also require a special interface device between your computer and the chip for programming, which could cost another $10-$40.

Costs
With the possible exception of DC motors, the microcontroller is the most expensive part of your robot. There is just no escaping the costs, especially for the beginner. But remember, after buying all this for your first robot, you do not need to buy any of it again, since you can reuse everything. So here is the breakdown of costs. The chip itself, without augmentation, costs only a few dollars. But the chip is useless without the augmentation, so you would need to do that yourself if you do not buy it already augmented. Doing it yourself could potentially cost just as much as buying it pre-augmented, and could cause you many frustrations.

If, however, you are more experienced (and for some odd reason still reading this), you can customize your own circuit to do exactly what you want. Why have a motor driver when you are only using servos anyway? If you decide to buy an augmented MCU, the cost will range from about $50-$150. To compile your program, you will need special compiler software. Atmel and BASIC Stamps have free compilers. PICs, however, have fairly expensive compilers. There are some free ones available online, but they are of poor quality in my opinion. The CCS C compiler for PICs is about $125, but I think it is worth getting if you are going to use PICs.

You will also need an uploader to transfer the program from your computer to the chip. This generally requires more special software and a special interface device. The Cerebellum PIC-based controller has this built in, which is really nice and convenient, but for any others expect to spend $10-$40. People often opt to just make their own, as the circuit isn't too complicated.

As a prototyper, what you probably want most is an MCU development board. These augmented microcontrollers are designed with the prototyper in mind. To find them, do a search for 'pic development board,' 'atmel development board,' 'stamp development board,' etc.


Conclusion
If you have more specific questions about microcontrollers, or would like me to go into more detail about something, just write me and I will.

Update
I've created a microcontroller product, called the Axon, that's both easy to learn and powerful in features. I use it for all my robot creations now, and will continuously release source code updates and tutorials using it. Feel free to check it out!

Axon Microcontroller

Neural Networks and Machine Learning  

Posted by technology2day

Neural networking is the science of creating computational solutions modeled after the brain. Like the human brain, neural networks are trainable: once they are taught to solve one complex problem, they can apply their skills to a new set of problems without having to start the learning process from scratch.

The Lab seeks models which combine the best aspects of neural network mechanisms and symbolic artificial intelligence machine learning paradigms. Neural networks and machine learning algorithms represent a dramatic departure from conventional programming techniques. Rather than explicitly build a program to solve a problem, examples, called "training sets," of a type of problem are given, which the neural network "learns" how to solve. The network can then be presented with new examples on which it was not trained, known as "test sets," and it will use the skills it gained from the training set to formulate solutions. For many tasks, neural networks actually outperform human experts. For example, a doctor must go through years of training to learn to diagnose a disease on the basis of a set of symptoms. A neural network, in comparison, would diagnose a disease by first learning from a training set made up of symptoms with the correct diagnoses, and then would formulate a diagnosis when presented with new symptoms on which it was not trained.
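
As a toy illustration of that training idea (not the Lab's actual code), here is a single perceptron in C learning the AND function from a small training set, then being queried for an answer afterwards.

#include <stdio.h>

int main(void)
{
    /* Training set: inputs paired with the correct answers (like symptoms with diagnoses). */
    double inputs[4][2] = { {0,0}, {0,1}, {1,0}, {1,1} };
    double targets[4]   = {   0,     0,     0,     1   };

    double w[2] = {0.0, 0.0}, bias = 0.0, rate = 0.1;

    /* Learning phase: nudge the weights whenever the network answers wrong. */
    for (int epoch = 0; epoch < 50; epoch++) {
        for (int i = 0; i < 4; i++) {
            double sum = w[0] * inputs[i][0] + w[1] * inputs[i][1] + bias;
            double out = (sum > 0.0) ? 1.0 : 0.0;
            double err = targets[i] - out;
            w[0] += rate * err * inputs[i][0];
            w[1] += rate * err * inputs[i][1];
            bias += rate * err;
        }
    }

    /* Query phase: present an input and read off the learned answer. */
    double sum = w[0] * 1.0 + w[1] * 1.0 + bias;
    printf("1 AND 1 -> %d\n", (sum > 0.0) ? 1 : 0);
    return 0;
}

In a real application the test set would contain examples never seen during training; AND has only four possible inputs, so this sketch simply re-queries one of them.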
Some of the problems considered in the Neural Networks and Machine Learning Laboratory are control problems, such as controlling a large flock of independent robots. Just as a person who understands how to drive a car can transfer that knowledge to driving a truck, a computer which controls one robot should quickly learn how to control a whole flock. Other problems being solved in the lab are planning and classification tasks. One project would allow computers to be able to recognize individuals' facial features and thus pick individuals out of photographs. Other applications include automatically sorting music libraries and classifying species of plants and animals.

Basic Science and Mathematics: Which Topics Are Most Needed?  

Posted by technology2day

Basic Science and Mathematics: Which Topics Are Most Needed?
The engineering technology faculty at Wake Technical College undertook a study in the fall of 1978 to determine if our basic science and mathematics offerings were relevant to graduates' needs on the job. Since 1964, when Wake admitted its first engineering technology students, the engineering technology division had expanded to six fully accredited two-year associate degree curricula with over 200 students enrolled. Feedback from employers and graduates indicates that the curricula are equipping graduates with the necessary entry-level skills. The explosion in technological information, however, has placed demands on two-year ET curricula to include more state-of-the-art subjects at the expense of fundamental science and mathematics subjects. Since only a limited number of topics can be covered in two years, ET curriculum planners must scrutinize subject matter to ensure that it helps to prepare students for jobs as science and engineering technicians and to avoid technical obsolescence as their field changes.
We surveyed graduates of Wake's six ET programs and their employers to learn what they considered the basic science and mathematics topics most needed by engineering technicians on the job. We also sought to obtain comments about topics not listed on the survey which may be needed.
Of the 697 participants selected to receive our questionnaire, 470 had graduated from one of the six ET programs at Wake between 1969 and 1977, and 227 were employers of graduates of these programs. The questionnaire was drafted by a group of department heads and a second group of people involved with two- and four-year ET programs nationwide.

Results

Table 1 summarizes the basic science and mathematics topics needed by engineering technicians, as determined by the 29 percent of the employers and 23 percent of the graduates who responded to the questionnaire. The findings are based on response patterns for a given item in which at least the group of employers or the group of graduates agreed with the combined group of respondents by a majority response in either the essential (E), desirable (X), or not needed ( ) categories.
1) The strongest support for the items under mechanics came from respondents in the architectural, chemical, civil engineering, and industrial engineering technologies.
2) The items under the fundamentals of electricity/electronics were unanimously supported by respondents in the computer, electronic engineering, and industrial engineering technologies.
3) All groups of respondents supported the study of the general theory of light, but only the electronic engineering technology respondents indicated support for all the items under light.
4) The study of the items under sound was supported by three groups of respondents: architectural, computer, and electronic engineering technologies.
5) All groups of respondents supported the study of heat.
6) Modern physics was important only to responding chemical technicians and electronic engineering technicians.
7) Only the chemical technology respondents supported the study of the chemistry subjects.
8) Items listed under biology were needed only by chemical technicians.
9) Civil engineering technicians were the only group who needed a knowledge of all the items under geology.
10) The two items under data processing were important to all but architectural technicians.
11) The study of algebra, trigonometry, logarithms, geometry, analytic geometry, and calculus was supported by all respondents.
12) The chemical, civil, electronic, and industrial engineering technology respondents indicated support for the items under statistics.

At the end of the questionnaire, the study participants were given the opportunity to make further comments, such as to be more specific with regard to certain topics or to list further topics they thought should be included.
In general, their comments addressed specific skills and knowledge required by technicians to do well in their jobs. The comments did reflect an awareness of the rapidly changing requirements in engineering technology and an appreciation of the value of basic science and mathematics in keeping abreast of these changes.
In addition to determining the basic science and mathematics topics most needed by engineering technicians, the study revealed several other trends:
Graduates and employers in all six engineering technology fields indicated that a knowledge of mathematics ranging from algebra to calculus was important for engineering technicians. The extent to which a certain mathematical topic was important depended upon its direct usefulness in solving day to day problems on the job. Support for the study of other mathematical topics resulted from a need for a foundation in mathematics which would afford the technician an opportunity to keep abreast of technological changes, as well as to develop analytical skills.
The respondents believed that an engineering technician needs a knowledge of basic science topics, which provide a foundation for applying skills and knowledge in their particular field. For example, chemical technicians indicated support for a study of the basic science of chemistry. Electronic technicians, on the other hand, indicated an interest in the fundamentals of electricity and electronics that explain the electrical phenomena associated with the application of electronics and electricity.
In the case of data processing, all participants except those in architectural technology believed that a knowledge of at least one scientific programming language was important. In addition, respondents indicated an interest in the study of COBOL.
Analysis of the response patterns of employers and graduates showed that graduates were more supportive of a knowledge of basic science and mathematics topics. Employers, on the other hand, tended to support only those topics that were immediately useful in solving day to day problems. This difference in response patterns can be attributed to the desire of engineering technicians to stay abreast of technological change, while their employers appear interested primarily in the knowledge and skills that contribute to immediate productivity.