LMF Assembly Sector
5G.1 Assembly Sector Components and Technology Assessment
After raw lunar soil has been processed by the chemical processing sector into metallic and nonmetallic elements, and the parts fabrication sector has used these substances to manufacture all parts needed for LMF construction activities (growth, replication, or production), the assembly sector must accept individual completed parts and fit them together into working machines and automated subsystems that are themselves capable of adding to the rate of construction activities. A number of basic functions are required to perform sophisticated assembly operations. These are outlined in the assembly sector operations flowchart in figure 5.18. Each functional subsystem is discussed briefly below.
Parts produced by the fabrication sector are delivered either to inventory or directly to the assembly sector via mobile Automated Transport Vehicle (ATV) which runs on wheels or guide tracks. Parts are also retrieved from inventory by the ATVs. All retrieved or delivered parts are placed in segregated bins as input to the automated assembly system.
Parts Recognition/Transport/Presentation (RTP) System
The Recognition/Transport/Presentation (RTP) system is responsible for selecting the correct parts from the input bins, transporting them to within the reach of assembly robots, and presenting them in a fashion most convenient for use by the assembly robots. This will require a manipulator arm, vision sensing, probably tactile sensing, and advanced "bin-picking" software.
Early research concentrated on the identification and handling of simple blocks. For instance, at Hitachi Central Research Laboratory prismatic blocks moving on a conveyor belt were viewed, one at a time, with a television camera and their position and orientation determined by special software. Each block was then tracked, picked up with a suction-cup end-effector, and stacked in orderly fashion under the control of a minicomputer (Yoda et al., 1970). In another early experiment performed at Stanford University, a TV camera with color filters and a manipulator arm was developed that could look at the four multicolored blocks of an "Instant Insanity" puzzle, compute the correct solution to the puzzle, and then physically stack the blocks to demonstrate the solution (Feldman et al., 1974).
At the University of Nottingham, the identity, position, and orientation of flat workpieces were determined one at a time as they passed under a down-looking TV camera mounted in a vertical turret much like microscope lens objectives. A manipulator then rotated into a position coaxial with the workpiece and acquired it (Heginbotham et al., 1972). More recently, software developed by General Motors Laboratories can identify overlapping parts laid out on a flat surface. The computer analyzes each part, calculates geometric properties, then creates line drawing models of each object in the scene and memorizes them. Subsequently, objects coming down the conveyor belt which resemble any of the memorized parts in shape - even if only small sections of a part can be seen or the lighting is poor - will be identified correctly by the system (Perkins, 1977).
In a recent series of experiments performed at SRI International, workpieces transported by an overhead conveyor were visually tracked. The SRI Vision Module TV camera views a free-swinging hanging casting through a mirror fixed on a table at 45°. An LSI-11 microprocessor servos the table in the x-y plane to track the swinging part. If a part is swinging over a 20 cm arc at about 0.5 Hz, the tracking accuracy is better than 1 cm continuously (Nitzan, 1979; Nitzan et al., 1979; Rosen, 1979). A moderate research and development program could produce an arm capable of tracking and grabbing a swinging part.
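The closed-loop tracking described above can be sketched as a simple proportional servo. The gain, timestep, and transient window below are illustrative assumptions, not the SRI values; the sketch only shows that a first-order servo can hold a 20 cm, 0.5 Hz swing to well under the quoted 1 cm.

```python
import math

def track_swinging_part(amplitude=0.10, freq=0.5, gain=0.5, dt=0.01, t_end=5.0):
    """Simulate a table servoing under a sinusoidally swinging part.

    amplitude : half the swing arc in meters (a 20 cm arc -> +/-0.10 m)
    freq      : swing frequency in Hz
    gain      : proportional correction applied each control cycle (assumed)
    Returns the worst tracking error (m) after the start-up transient.
    """
    table = 0.0
    worst = 0.0
    t = 0.0
    while t < t_end:
        part = amplitude * math.sin(2 * math.pi * freq * t)  # swinging part position
        table += gain * (part - table)                       # servo table toward part
        if t > 1.0:                                          # ignore start-up transient
            worst = max(worst, abs(part - table))
        t += dt
    return worst

print(track_swinging_part())  # worst error stays below 0.01 m
```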
At Osaka University a machine vision system consisting of a television camera coupled to a minicomputer can recognize a variety of industrial parts (such as gasoline engine components) by comparing visual input of unknown parts with stored descriptions of known parts. The system can be quickly trained to recognize arbitrary new objects, with the software generating new internal parts models automatically using cues provided by the operator. The present system can recognize 20-30 complex engine parts as fast as 30 sec/part, and new objects can be learned in 7 min (Yachida and Tsuji, 1975). Another system developed at SRI International can determine the identity, position, and orientation of workpieces placed randomly on a table or moving conveyor belt by electrooptical vision sensing, then direct a Unimate industrial robot arm to pick up the workpiece and deliver it to the desired destination (Agin and Duda, 1975).
Contact sensing may also be used in parts recognition. Takeda (1974) built a touch sensing device consisting of two parallel fingers each with an 8 X 10 needle array free to move in and out normal to the fingers and a potentiometer to measure the distance between the fingers. As the fingers close, the needles contact an object's surface contour in a sequence that describes the shape of the object. Software was developed to recognize simple objects such as a cone.
Of direct relevance to the lunar self-replicating factory RTP system is the "bin-picking" research conducted at SRI International. This involves the recognition and removal of parts from bins where they are stored by a robot manipulator under computer control. Three classes of "bins" may be distinguished: (1) workpieces highly organized spatially and separated, (2) workpieces partially organized spatially and unseparated, and (3) workpieces in completely random spatial organization. Simple machine vision techniques appear adequate for bin picking of the first kind, essentially state-of-the-art. Semiorganized parts bins (second class) can be handled by state-of-the-art techniques, except that picking must be separated into two stages. First, a few parts are removed from the bin and placed separately on a vision table. Second, standard identification and manipulation techniques are employed to pick up and deliver each part to the proper destination. Parts bins of the third class, jumbled or random pieces, require "a high level of picture processing and interpretive capability" (Rosen, 1979). The vision system has to cope with poor contrast, partial views of parts, an infinite number of stable states, variable incident and reflected lighting, shadows, geometric transformations of the image due to variable distance from camera lens to part, etc. - a formidable problem in scene analysis. Some innovations have been made at General Motors in this area (Perkins, 1977), but researchers believe that progress using this technique alone will be slow, and that practical implementation will require considerably faster and less expensive computational facilities than are presently available (Rosen, 1979).
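The three-class taxonomy reduces to a simple strategy dispatch: class 1 permits direct vision-guided picking, class 2 inserts the extra spread-onto-a-vision-table stage, and class 3 falls through to the hard scene-analysis case. The step strings below are an illustrative summary, not terminology from the SRI work.

```python
def picking_strategy(bin_class):
    """Map the three bin classes to an ordered list of picking steps (illustrative)."""
    if bin_class == 1:    # workpieces organized and separated
        return ["identify part with vision", "pick and deliver"]
    elif bin_class == 2:  # partially organized, unseparated: two-stage picking
        return ["remove a few parts onto a vision table",
                "identify part with vision", "pick and deliver"]
    elif bin_class == 3:  # jumbled parts: full scene analysis required first
        return ["scene analysis (overlap, shadows, partial views)",
                "identify part with vision", "pick and deliver"]
    raise ValueError("unknown bin class")
```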
At SRI an end-effector with four electromagnets and a contact sensor has been built to pick up four separate castings from the top of a jumbled pile of castings in a bin. A Unimate transports the four castings to a backlighted table and separates them. Then a vision subsystem determines stable states, position, and orientation, permitting the Unimate gripper to pick up each casting individually and transfer it to its proper destination (Nitzan et al., 1979).
Although clearly more work needs to be done, a great deal of progress already has been made. It is possible to imagine a 5-10 year R&D effort which could produce the kind of RTP system required for the LMF assembly sector. Considerably more effort will be required to achieve the level of sophistication implied by Marvin Minsky's reaction to a discussion of current bin-picking and conveyor belt picking technology: "On this question of the variety of parts on assembly lines, it seems to me that assembly lines are silly and when we have good hand-eye robots, they will usually throw the part across the factory to the machine who wants it and that machine will catch it" (Rosen, 1979). The RTP system for the self-replicating LMF does not require this extreme level of robot agility.
Parts Assembly Robots
Once the correct parts have been identified, acquired, and properly presented, assembly robots must put them together. These assemblies - electric motors, gearboxes, etc. - are not yet working machines but rather only major working components of such machines. Thus it may be said that assembly robots assemble simple parts into much more complex "parts."
There has been a certain amount of basic research on aspects of programmable assembly. At MIT in 1972 a program called COPY could look at a simple structure built of children's building blocks, then use a manipulator to physically build a mirror image of the structure to prove its "understanding" of the block shapes and orientations. It would do this by withdrawing the blocks it needed from a collection of objects in its field of view, randomly spread out on a table (Winston, 1972). In Japan, a Hitachi robot called HIVIP could perform a similar task by looking at a simple engineering drawing of the structure rather than at the physical structure itself (Ejiri et al., 1971). In Edinburgh the FREDDY robot system could be presented with a heap of parts comprising a simple but disassembled model. Using its TV cameras and a manipulator, the system sorted the pieces, identified them correctly, then assembled the model. Assembly was by force and touch feedback, using a vise to hold partial assemblies, and parts recognition was accomplished by training (Ambler et al., 1975).
Research has also begun on the problems involved in fitting parts together or "parts mating." For instance, Inoue (1971) programmed a manipulator to insert a peg into a hole using force sensing at the manipulator joints. A more sophisticated version was later built by Goto at Hitachi Central Research Laboratory. This version consisted of a compliant wrist with strain gauge sensors to control the insertion of a 1.2-cm polished cylinder into a vertical hole with a 7 to 20 μm clearance in less than 3 sec (Goto et al., 1974).
Besides fitting, assembly operations also include fastening. The most common methods include spot welding, riveting, arc welding, bolting, nailing, stapling, and gluing, all of which have been automated to some degree. Numerical-control (N/C) riveting machines have replaced human riveters in the production of jetliner wings at Boeing Aerospace (Heppenheimer, 1977). At Westinghouse Electric Corporation a four-joint programmable manipulator under minicomputer control performs arc welding along curved trajectories (Abraham and Shum, 1975). According to information gleaned from Ansley (1968) and Clarke (1968), the Gemini spacecraft required 0.15 m/kg of seam welds and 6.9 spot welds/kg. Thus, for a 100-ton LMF seed equal to the Gemini capsule in its welding requirements, 15,000 m of seam welding would be required. This should take about a month of continuous work for a dedicated 5-10 kW laser welder (see appendix 5F). Another alternative is to make positive use of vacuum welding. Surfaces of parts to be fastened would be cleaned, then pressed gently together, causing a cold weld if they are made of the same or similar metallic material. Cast basalt end-effectors will probably be required for handling in this case.
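The Gemini-derived welding estimate can be checked directly. The 30-day month is an assumption used only to sanity-check the "about a month" figure by extracting the implied seam welding rate.

```python
seed_mass_kg = 100_000            # 100-ton LMF seed
seam_weld_per_kg = 0.15           # m of seam weld per kg (Gemini figure)
spot_welds_per_kg = 6.9           # spot welds per kg (Gemini figure)

seam_length_m = seed_mass_kg * seam_weld_per_kg   # 15,000 m of seam weld
spot_welds = seed_mass_kg * spot_welds_per_kg     # 690,000 spot welds

month_s = 30 * 86_400                             # assumed 30-day working month
implied_rate_mm_per_s = seam_length_m * 1000 / month_s  # ~5.8 mm/s average

print(seam_length_m, spot_welds, round(implied_rate_mm_per_s, 1))
```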
At a high level of sophistication, assembly of certain well-defined machines from basic parts has been studied. Abraham and Beres (1976) at Westinghouse have described a product line analysis in which assembly line automation sequences were considered for constructing ten candidate assemblies, including a continuous operation relay (300 assembly steps), low voltage bushings (5 parts), W-2 low voltage switches (35 parts), fuse assembly (16 steps), and a small motor rotor assembly (16 steps). The tasks and implementation list for a sample motor rotor assembly is shown in table 5.19. This research has evolved into the Westinghouse APAS System, which uses state-of-the-art industrial robots and can automatically assemble complete electric motors of eight different classes representing 450 different motor styles discovered in a broad survey of all motors (van Cleave, 1977).
Table 5.19.- Assembly Tasks For A One-Robot Configuration, To Assemble Small Motor Rotors
| Sequential tasks | Task implementation methods |
|---|---|
| 1. Heat core in oven | New vertical in-line oven |
| 2. Place shaft in hot core | Dedicated assembly unit |
| 3. Quench cool | Water spray |
| 4. Transfer subassembly to in-line conveyor | Pick and place device #1 |
| 5. Stake shaft | Automatic stake machine |
| 6. Test subassembly | Automatic test device |
| 7. (Optional - remove reject subassembly) | Computer-controlled robot #1 |
| 8. Retrieve switch from vision table | |
| 9. Place switch on shaft | |
| 10. Retrieve top sleeve | |
| 11. Place top sleeve on shaft | |
| 12. Press top sleeve and switch | Dedicated assembly units |
| 13. Assemble bottom sleeve and press | |
| 14. Assemble rubber washers | |
| 15. Transfer subassembly to conveyor | Pick and place device #2 |
| 16. Assemble nylon washers | Dedicated assembly unit |
Other major industry and laboratory accomplishments include the following:
Clearly, a great deal of progress has been made, but much more remains to be made in all areas before an LMF-capable universal assembly system could be designed. Nitzan (private communication, 1980) estimates that such a system might become available commercially by the end of the present century at the present rate of development. The amazing progress of the Japanese in developing "unmanned manufacturing" systems confirms this estimate, and suggests that by the end of the present decade a serious effort to design a universal assembly system of the type required for the lunar SRS might be successful.
If the original LMF seed has about 10^6 parts which must be assembled within a replication time T = 1 year, then parts must be assembled at an average rate of 31 sec/part. If subassembly construction is included with successive ranks of ten (i.e., 10 parts make a subassembly, then 10 subassemblies make a more complex subassembly, etc.), then 1.111111×10^6 assembly operations are required, which is only 28 sec/operation. This is fairly typical for assembly operations requiring 100% verification at each step using state-of-the-art techniques. The Draper robot described earlier assembles 17 parts in 162 sec, or 9.5 sec/part, and the projected improvement to 60 sec for the whole alternator assembly task would decrease this to 3.5 sec/part, an order of magnitude less than the mean continuous rate required for successful LMF operation.
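The rates quoted above follow from simple arithmetic, using 1 year ≈ 3.15×10^7 s:

```python
parts = 10**6
year_s = 3.15e7                  # approximate seconds per year

base_rate = year_s / parts       # ~31.5 s per part

# With subassembly ranks of ten: 10^6 + 10^5 + ... + 10 + 1 operations
operations = sum(10**k for k in range(7))   # 1,111,111 assembly operations
ranked_rate = year_s / operations           # ~28.3 s per operation

print(round(base_rate, 1), operations, round(ranked_rate, 1))
```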
Assembly Inspection Robots
After parts have been assembled by assembly robots with 100% verification at each step, the final assembly must be inspected as a final check to ensure it has been correctly built from the correct parts. According to Rosen (1979), machine vision for inspection may be divided into two broad classes: (1) inspection requiring highly quantitative measurement, and (2) inspection that is primarily qualitative but frequently includes semiquantitative measures.
In the quantitative inspection class, machine vision may be used to inspect stationary and moving objects for proper size, angles, perforations, etc. Also, tool wear measurements may be made. The qualitative inspection class includes label reading; sorting based on shape, integrity, and completeness of the workpiece (burrs, broken parts, loose or missing screws, pits, cracks, warping, printed circuit miswiring); and cosmetic and surface finishes. Each type of defect demands the development of specialized software which makes use of a library of subroutines, each effecting the extraction and measurement of a key feature. In due course, this library will be large and be able to accommodate many common defects found in practice. Simple vision routines utilizing two-dimensional binary information can handle a large class of defects. However, three-dimensional information, including color and gray-scale, will ultimately be important for more difficult cases (Rosen, 1979).
With the SRI-developed vision module, a number of inspection tasks have been directed by computer. For example, washing machine water pumps were inspected to verify that the handle of each pump was present and to determine in which of two possible positions it was. A group of electrical lamp bases was inspected to verify that each base had two contact grommets and that these were properly located on the base. Round and rectangular electrical conduit boxes were inspected as they passed on a moving conveyor, the camera looking for defects such as missing knockouts, missing tabs, and box deformation (Nitzan, 1979).
An inspection system developed by Auto-Place, Inc. is called Opto-Sense. In one version, a robot brings the workpiece into the field of vision. Coherent laser light is programmed by reflection off small adjustable mirrors to pass through a series of holes and slots in the part. If all "good part" conditions are met, the laser light is received by the detector and the part is passed. In addition to looking at the presence or absence of holes and object shape, the laser system can also check for hole size and location, burrs or flash on parts, and many other conditions (Kirsch, 1976). Range-imaging by lasers is well suited for the task of inspecting the completeness of subassemblies (Nitzan et al., 1977).
An inspection system designed for an autonomous lunar factory would need an internal laser source, a three-dimensional scanning pattern, at least two detectors for simple triangulation/ranging, a vision system for assembly recognition and position/orientation determination, and a large library of parts and assemblies specifications so that the inspection system can determine how far the object under scrutiny deviates from nominal and a valid accept/ reject/repair decision may be made.
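The accept/reject/repair decision described above reduces to comparing the measured deviation from nominal against two thresholds. The threshold structure and the idea of a distinct "repairable" band are assumptions for illustration; the actual criteria would come from the parts and assemblies specification library.

```python
def inspection_decision(deviation, accept_tol, repair_tol):
    """Classify an assembly by its measured deviation from nominal (illustrative).

    deviation  : scalar deviation from the stored nominal specification
    accept_tol : deviations at or below this pass outright
    repair_tol : deviations at or below this (but above accept_tol) are repairable
    """
    if deviation <= accept_tol:
        return "accept"
    elif deviation <= repair_tol:
        return "repair"
    return "reject"
```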
Electronics Assembly Robots
Electronics components, including resistors, capacitors, inductors, discrete semiconductor components (diodes, thyristors), and microelectronic "chips" (microprocessors, RAMs, ROMs, CCDs) are produced by the Electronics Fabrication System in the fabrication sector. Aluminum wire, spun basalt insulation, and aluminum base plates are provided from the bulk or parts fabrication system described in appendix 5F. After these parts are properly presented to the electronics assembly robots, these robots must assemble the components into major working electronics systems such as power supplies, camera systems, mini/microcomputer CPUs, computer I/O units, bulk memory devices, solar cell panels, etc. Electronics assembly appears to require a technology considerably beyond the state-of-the-art.
Present techniques for automated electronics assembly extend mainly to automatic circuit board handling. For instance, Zagar Inc. uses an automatic PCB drilling machine, and Digital Systems Inc. has an N/C automatic drilling machine with four speeds for drilling four stacks of boards simultaneously (Ansley, 1968). A circuit-board assembly line at Motorola allows automatic insertion of discrete components into circuit boards - the plug-in modular 25-machine conveyor line applied 30,000 electrical connections per hour to printed circuit modules used in Motorola Quasar television sets (Luke, 1972). Using four specialized assembly machines developed for Zenith, a single operator can apply more than half a million electrical contacts to more than 25,000 PCBs in one 8-hr shift (Luke, 1972).
Probably one of the most advanced electronics assembly systems currently available is the Olivetti/OSAI SIGMA-series robots (Thompson, 1978). The minicomputer-controlled SIGMA/MTG two-arm model has eight degrees of freedom (total) and a positioning accuracy of 0.15 mm. In PCB assembly, boards are selected individually from a feeding device by a robot hand, then positioned in a holding fixture. This method frees both hands to begin loading integrated circuit (IC) chips into the boards. The robot hands can wiggle the ICs to make them fit if necessary. ICs are given a cursory inspection before insertion, and bad ones are rejected. Assembly rates of 12,500 IC/hr are normally achieved (50 IC/PCB and 250 PCB/hr) for each robot hand pair, 2-3 per human operator. The two arms are programmed to operate asynchronously and have built-in collision avoidance sensors. In other operations, different SIGMA-model robots assemble typewriter parts such as ribbon cartridges, typewriter key cap assemblies, and mechanical key linkages.
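The quoted SIGMA throughput figures are mutually consistent, as a quick check shows:

```python
ic_per_pcb = 50       # integrated circuits per board
pcb_per_hr = 250      # boards per hour per robot hand pair

ic_per_hr = ic_per_pcb * pcb_per_hr   # 12,500 IC/hr, as quoted
s_per_ic = 3600 / ic_per_hr           # ~0.29 s per insertion

print(ic_per_hr, round(s_per_ic, 2))
```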
The SIGHT-1 computer vision system developed by General Motors' Delco Electronics Division locates and calculates the position of transistor chips during processing for use in car and truck high-energy ignition systems. It also checks each chip for structural integrity and rejects all defectives (Shapiro, 1978). The simple program logic for the IC chip inspection is shown in figure 5.45.
Figure 5.45. - Program logic for the GM/Delco IC "chip" inspection system.
A most serious gap in current technology is in the area of inspection. There are few if any systems for automatic circuit verification - at present, inspection is limited to external integrity and structural irregularities or requires a human presence. At present, neither IC nor PCB performance checking is sufficiently autonomous for purposes of SRS.
Bin Packing for Warehouse Shipment
Bin packing (or crate loading for shipment) is a straightforward problem in robotics provided the parts and crate presentation difficulties have already been solved. SRI International has done a lot of work in this area. For example, using feedback from a proximity sensor and a triaxial force sensor in its "hand," a Unimate robot was able to pick up individual preassembled water pumps from approximately known positions and pack them neatly in a tote-box. In another experiment boxes were placed randomly on a moving conveyor belt; the SRI vision system determined the position and orientation of each box, and permitted a Unimate robot arm to pack castings into each box regardless of how fast the conveyor was moving (Rosen et al., 1978). At Hitachi Central Research Laboratory, Goto (1972) built a robot "hand" with two fingers, each with 14 outer contact sensors and four inner pressure-sensitive conductive rubber sensors that are able to pick up blocks located randomly on a table and pack them tightly onto a pallet.
A related and interesting accomplishment is the stenciling of moving boxes. In an experiment at SRI International, boxes were placed randomly on a moving conveyor and their position and orientation determined by a vision system. The visual information was used by a Unimate robot to place a stencil on the upper right corner of each box, spray the stencil with ink, then remove the stencil, thus leaving a permanent marking on each box (Rosen et al., 1976). An immediate extension of this technique would be to use the vision module to recognize a particular kind of box coming down the conveyor line, and then choose one of many possible stencils which was the "name" of that kind of box. Then the stenciling could be further extended to objects in the boxes, say, parts, in which case the end result would be a robot capable of marking individual objects with something akin to a "universal product code" that warehouse or assembly robots could readily identify and recognize.
Automated Transport Vehicles
Automated Transport Vehicles (ATVs), or "parts carts," are responsible for physically moving parts and subassemblies between sectors, between robot assembly stations, and in and out of warehouses in various locations throughout the LMF. Mobile carriers of the sophistication required for the lunar seed do not exist, but should be capable of development within a decade given the present strong interest in developing totally automated factories on Earth.
Luke (1972) describes a tow-cart system designed by SI Handling Systems, Inc., for use in manufacturing plants. These "switch-carts" serve as mobile workbenches for assembly, testing and inspection, and for carrying finished products to storage, shipping areas, or to other work areas. Carts can be unloaded manually or automatically, or loaded, then "reprogrammed" for other destinations. However, these carts are passive machines - they cannot load or unload themselves and they have no feedback to monitor their own condition (have they just tipped over, lost their load, had a load shift dangerously, etc.?). They have no means of remote communication with a centralized source of control, and all destination programming is performed manually. The ideal system would include vision and touch sensors, a loading/unloading crane, vestibular or "balance" sensors, an onboard microcomputer controller, and a radio link to the outside. This link could be used by the ATV to periodically report its status, location, and any malfunctions, and it could be used by the central factory computer to inform the ATV of traffic conditions ahead, new routes, and derailed or damaged machines ahead to avoid or to assist.
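The ideal ATV's radio reporting could be sketched as a periodic status message interpreted by the central factory computer. All field names here are invented for illustration; the paragraph above only specifies that status, location, and malfunctions be reported.

```python
from dataclasses import dataclass, field

@dataclass
class ATVStatus:
    """Periodic status report an ATV might radio to the central factory
    computer (hypothetical message layout)."""
    vehicle_id: str
    location: tuple            # (x, y) position within the LMF
    load_secure: bool          # load-shift sensors nominal
    upright: bool              # vestibular/"balance" sensors: not tipped over
    faults: list = field(default_factory=list)   # reported malfunctions

    def needs_assistance(self):
        """True if the central computer should dispatch help to this cart."""
        return bool(self.faults) or not (self.load_secure and self.upright)
```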
A major step forward was the now legendary "Shakey" robot, an SRI project during 1968-1972 (Raphael et al., 1971). Shakey was, in essence, a prototype mobile robot cart equipped with a TV camera, rangefinder, and radio link to a central computer. The system could be given, and would successfully execute, such simple tasks as finding a box of a certain size, shape, and color, and pushing it to a designated position. The robot could form and execute simple plans for navigating rooms, doorways, and floors littered with large blocks. Shakey was programmed to recover from certain unforeseen circumstances, cope with obstacles, store (learn) generalized versions of plans it produced for later use, and execute preliminary actions in pursuance of principal goals. (In one instance, Shakey figured out that by moving a ramp a few feet it could climb up onto a platform where the box it needed to move was resting.) The robot also carried out a number of manipulative functions in cooperation with a Unimate robot arm; Shakey had no manipulators of its own.
Work of a similar nature is now in progress in French laboratories. For example, the mobile robot HILARE is a modular, triangular, and computer-controlled mobile cart equipped with three wheels (two of them motor-driven), an onboard microcomputer, a sophisticated sensor bank (vision, infrared, ultrasonic sonar/proximity, and telemetry laser), and in the future a manipulator arm will be added (Prajoux, 1980). HILARE's control systems include "expert modules" for object identification, navigation, exploration, itinerary planning, and sensory planning.
The Japanese have also made significant progress in this area. One design is an amazing driverless "intelligent car" that can drive on normal roads at speeds up to 30 km/hr, automatically avoiding stationary obstacles or stopping if necessary (Tsugawa et al., 1979). Other Japanese mobile robot systems under development can find pathways around people walking in a hallway (Tsukiyama and Shirai, 1979), and can compute the relative velocities and distances of cars in real time to permit a robot car to be able to operate successfully in normal traffic (Sato, 1979).
Automated Warehouse Robots
Workpieces and other objects delivered to LMF warehouse facilities for storage must be automatically stowed away properly, and later expeditiously retrieved, by the warehouse robots. Numerous advanced and successful automated warehouse systems have already been installed in various commercial operations. A typical system in use at Rohr Corporation efficiently utilizes space and employs computer-controlled stacker cranes to store and retrieve standardized pallets (Anderson, 1972). The computer keeps records on the entire inventory present at any given time as well as the status of all parts ingoing and outgoing.
Similar techniques were used in the semiautomated "pigeonhole" storage systems for sheet metal and electric motors (in the 3/4 to 30 hp range) first operated by Reliance Steel and Aluminum Company decades ago. Each compartment contained one motor or up to 2250 kg of flat precut aluminum, magnesium, or high-finish stainless or galvanized steel stored on pallets. Retrieval time was about 1 min for the motors and about 6 min for the entire contents of a sheet metal compartment (Foster, 1963; Luke, 1972).
The technology in this area appears not to be especially difficult, although a "custom" system obviously must be designed for the peculiarities of lunar operations.
Mobile Assembly and Repair Robots
A Mobile Assembly and Repair Robot (MARR) must take complex preassembled parts (motors, cameras, microcomputers, robot arms, pumps) and perhaps a limited number of simple parts (bolts, washers, gears, wires, or springs) and assemble complete working LMF machines (mining robots, materials processing machines, warehouse robots, new MARRs). A MARR requires mobility, because it easily permits complex assembly of large interconnected systems and allows finished machines to be assembled in situ wherever needed in any LMF sector (Hollis, 1977). A MARR needs full mobility independent of specialized tracks or roadways, a wide range of sophisticated sensors (including stereo vision, IR and UV, radar and microwave, and various contact, contour, and texture sensing capabilities) mounted on flexible booms perhaps 4 m long. MARRs also require at least one "cherry picker" crane, a minimum of two heavy-duty manipulator arms, two light-duty manipulator arms with precision end-effectors, and a wide selection of tools (e.g., screwdrivers, rivet guns, shears, soldering gun, and wrenches). A radio link and onboard computer-controller are also essential.
MARRs have an omnibus mission illustrated by the diversity of the following partial list of tasks:
According to van Cleave (1977), when General Motors began to consider the design of automated assembly systems for automobiles "the assembly of vehicles was rejected as being too complex for the time being so studies are confined to subassemblies." This area is identified as a major potential technology driver - insufficient research has been conducted on the development of systems for complete automated final assembly of working machines from subassemblies in an industrial production setting.
For instance, at General Motors Research Laboratories the most progress made to date is an experimental system to mount wheels on automobiles (Olsztyn, 1973). The location of the studs on the hubs and the stud holes on the wheels were determined using a TV camera coupled to a computer, and then a special manipulator mounted the wheel on the hub and engaged the studs in the appropriate holes. According to Rosen and Nitzan (1977), "although this experiment demonstrated the feasibility of a useful task, further development is needed to make this system cost-effective." The prospects for semiautonomous assembly robots have recently been favorably reviewed by Leonard (1980).
In Japan, much recent work has dealt with the design and construction of robot "hands" of very high dexterity of the sort which might be needed for fine precision work during delicate final assembly and other related tasks. Takese (1979) has developed a two-arm manipulator able to do tasks requiring cooperation between the arms - such as turning a crank, boring a hole with a carpenter's brace and bit, sawing wood, driving nails with a hammer, and several other chores. Okada (1979), also of the Electrotechnical Laboratory in Tokyo, has devised a three-fingered robot hand of incredible dexterity. Each finger has three joints. The hand of Okada's robot can tighten nuts on a threaded shaft, shift a cylindrical bar from side to side while holding it vertically, slowly twirl a small baton, and rotate a ball while holding it. Further research will extend into more complex movements such as tying a knot, fastening buttons, and using chopsticks.
Although some of the needed technologies for final assembly are slowly becoming available, many are not. Further, no attempt has yet been made to produce a final assembly robot, let alone a truly universal final assembly robot such as the MARRs required for the LMF. Such a machine is a leap beyond even the ambitious Japanese MUM program mentioned in appendix 5F - even MUM envisions a minimum continuing human presence within the factory.
Conceptually, final assembly seems not intractable - a typical machine can be broken down into perhaps a few dozen basic subassemblies. But little research has been done, so potential difficulties remain largely unknown. Major problem areas may include verification and debugging, subassembly presentation and recognition, actual subassembly interconnection or the mating of complex surfaces, and heavy lifting - flexible robot arms capable of lifting much more than their own weight quickly, accurately, and dexterously do not yet exist.
The MARR system is a major R&D area which must be explored further before LMF design or deployment may practically be attempted.
5G.2 Assembly and LMF Computer Control
As with the other sectors, the assembly sector ultimately operates under the direction of a central computer which runs the entire factory. The assembly sector minicomputer, in turn, directs the many microcomputers which control its various assembly robots, transport robots, and warehouse robots. The entire manufacturing system is thus controlled by a hierarchy of distributed computers, and can simultaneously manufacture subsets of groups of different products after fast, simple retraining exercises, either programmed by an "intelligent" central computer or remotely by human beings. Plant layout and production scheduling are optimized to permit maximum machine utilization and speed of manufacturing, and to minimize energy consumption, inventories, and wastage (Merchant, 1975).
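The control hierarchy just described can be sketched as a toy dispatch loop. This is only an illustrative sketch - the class names, the round-robin scheduling, and the job format are assumptions of this example, not features of any cited system:

```python
# Minimal sketch of a three-level control hierarchy: a central factory
# computer dispatches jobs to sector minicomputers, which in turn break
# them into steps for robot microcontrollers. All names are illustrative.

class RobotMicro:
    """Lowest level: executes one primitive step at a time."""
    def __init__(self, name):
        self.name = name
        self.log = []                  # stand-in for actuator commands

    def execute(self, step):
        self.log.append(step)
        return f"{self.name}: done {step}"

class SectorMini:
    """Middle level: decomposes a job into steps for its robots."""
    def __init__(self, robots):
        self.robots = robots

    def run_job(self, job, steps):
        results = []
        for i, step in enumerate(steps):
            robot = self.robots[i % len(self.robots)]  # round-robin dispatch
            results.append(robot.execute(f"{job}/{step}"))
        return results

class CentralComputer:
    """Top level: routes each product's work plan to the right sector."""
    def __init__(self, sectors):
        self.sectors = sectors

    def manufacture(self, product, plan):
        return {sector: self.sectors[sector].run_job(product, steps)
                for sector, steps in plan.items()}

assembly = SectorMini([RobotMicro("arm-1"), RobotMicro("arm-2")])
central = CentralComputer({"assembly": assembly})
report = central.manufacture("gearbox", {"assembly": ["fetch", "mate", "fasten"]})
```

Retraining the factory for a new product, in this picture, amounts to handing the top level a new plan rather than rewiring the lower levels.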
Merchant (1973) suggests that a fully automatic factory capable of producing and assembling machined parts will consist of modular manufacturing subsystems, each controlled by a hierarchy of micro- and minicomputers interfaced with a larger central computer, with the modular subsystems together performing seven specific manufacturing functions.
Such a completely computer-integrated factory does not yet exist, though various major components of this kind of system have been constructed and are in use in industry in the United States, Europe, and Japan. The most ambitious plan to reach Merchant's level of full automation is the Japanese MUM program which aims at "unmanned manufacturing" (computer-controlled operations, man-controlled maintenance) in the 1980-1985 time frame and "complete automatic manufacturing" (computer-controlled operations and maintenance) by 2000-2005 (Honda, 1974).
According to advanced planning notes, the most advanced and expensive MUM system would be "metabolic," "capable of being expanded," and "capable of self-diagnosis and self-reproduction.... With a built-in microcomputer, it is a self-diagnosis and self-reproduction system which can inspect functional deteriorations or abnormal conditions and exchange machine elements for identical ones. It is a hierarchy-information system with built-in microcomputer, middle computer, and central control computer. It can alleviate the burden on the central computer, and is capable of rapid disposal in case the computer fails. It is also capable of expansion" (Honda, 1974). Plans to open an automated robot-making factory at Fujitsu in accordance with the MUM philosophy are proceeding smoothly (see appendix 5F).
5G.3 Sector Mass and Power Estimates
A set of mass and power estimates for assembly systems was obtained from several sources and is displayed in table 5.20. Taking the extremes in each range, and given the throughput rate required to replicate the original LMF seed in 1 year, we find that the mass of assembly sector machinery lies between 83 and 1100 kg and its power consumption between 0.083 and 19 kW. If the warehouse robots and their fixed plant have a mass of about 1% of the stored goods (parts for an entire 100-ton seed) and a power requirement of about 10 W/kg, their mass is about 1 ton and their power draw about 10 kW.
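The quoted extremes follow directly from the coefficients in table 5.20. As a check, assuming a 3.15×10⁷ sec year and the 100-ton (10⁵ kg) seed throughput:

```python
# Back-of-envelope check of the assembly machinery mass/power range,
# using the plant coefficients from table 5.20.
SECONDS_PER_YEAR = 3.15e7
throughput = 1.0e5 / SECONDS_PER_YEAR      # kg/sec to replicate the seed in 1 yr

plant_mass_coeff = [4.3e4, 3.6e5, 2.6e4]   # kg per (kg/sec output), table 5.20
power_coeff = [2, 1, 17]                   # W per kg of plant, table 5.20

masses = [c * throughput for c in plant_mass_coeff]   # kg of machinery
low_mass, high_mass = min(masses), max(masses)        # ~83 kg to ~1100 kg
low_power = low_mass * min(power_coeff)               # ~83 W  = 0.083 kW
high_power = high_mass * max(power_coeff)             # ~19 kW
```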
Table 5.20.- Mass and Power Estimates for Assembly Systems From Various Sources

| Source | Plant mass, kg per (kg/sec output) | Plant power, W/kg plant |
| --- | --- | --- |
| Johnson and Holbrow (1977) - bulk processing and heavy industry estimate for human workers | 4.3×10⁴ | 2 |
| Criswell (1980) - for "Cold Macro Assembly" | 3.6×10⁵ | 1 |
| PUMA (1980) arm and controller computer, assuming 88 kg mass, 1500 W power, speed 1 part/30 sec assembly, part mass 0.1 kg/part | 2.6×10⁴ | 17 |
The automated transport vehicles may have to carry the entire seed mass as often as ten times during the course of a year's growth, replication, or production. This is a hauling rate of 3.2×10⁻² kg/sec, or 0.32 parts/sec. If the average trip for an ATV is 100 m (the initial seed diameter), with a mean velocity of 1 km/hr (taking account of downtime for repairs, reprogramming, on- and off-loading, rescues, etc.), then the ATV trip time is 360 sec (6 min) and the average load is 11.5 kg/trip, or 115 "typical parts"/trip. While a properly designed hauler should be capable of bearing at least its own weight in freight, ATVs also require special equipment for manipulation as well as hauling, so a conservative estimate for the ATV fleet mass is 100-1000 kg. If a typical vehicle power consumption is 20 (J/m)/kg (Freitas, 1980), the power requirement for the fleet is 0.56-5.6 kW.
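These ATV figures can be verified with a few lines of arithmetic (assuming, as above, a 0.1-kg typical part and a 3.15×10⁷ sec year):

```python
# Back-of-envelope check of the ATV fleet figures.
SECONDS_PER_YEAR = 3.15e7
seed_mass = 1.0e5                # kg (100-ton seed)
hauls_per_year = 10              # whole seed mass carried ~10 times per year

haul_rate = hauls_per_year * seed_mass / SECONDS_PER_YEAR  # ~3.2e-2 kg/sec
parts_rate = haul_rate / 0.1                               # ~0.32 parts/sec

trip_length = 100.0              # m, initial seed diameter
velocity = 1000.0 / 3600.0       # 1 km/hr in m/s, downtime included
trip_time = trip_length / velocity                         # 360 sec
load_per_trip = haul_rate * trip_time                      # ~11.5 kg/trip

# Drive power at 20 (J/m)/kg of vehicle, for a 100-1000 kg fleet:
specific_drive = 20.0            # J per m traveled per kg of vehicle
fleet_power = [m * specific_drive * velocity for m in (100.0, 1000.0)]
# -> roughly 0.56 kW to 5.6 kW
```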
As for MARRs, the "warden" robots in the Project Daedalus BIS starship study (Martin, 1978) served a similar function and were allocated to the main vessel in the amount of 10⁻⁷ robots/kg-year serviced. To service a 100-ton LMF seed for a century would thus require one "warden" of mass 1 ton and a power draw of 10 W/kg. Conservatively assigning one MARR each to the chemical processing, parts fabrication, electronics fabrication, and assembly sectors gives a fleet of four MARRs with a total mass of 4 tons and a total power draw of 40 kW. The main seed computer has a mass of 2200 kg, or 2.2×10⁻² kg of computer per kg serviced as in Martin (1978). At 17 W/kg, as for the PUMA robot arm controller computer (Spalding, personal communication, 1980), the seed computer power requirement is 37 kW.
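A brief check of the warden and computer scaling (the specific computer mass is simply 2200 kg of computer per 10⁵ kg of seed serviced):

```python
# Check of the "warden" (MARR) and seed-computer scaling estimates.
seed_mass = 1.0e5                     # kg (100-ton seed)
warden_alloc = 1.0e-7                 # robots per kg-year serviced (Martin, 1978)
wardens = warden_alloc * seed_mass * 100   # a century of service -> 1 warden

marr_fleet = 4                        # chem processing, parts fab,
                                      # electronics fab, assembly
marr_mass = marr_fleet * 1000.0       # kg, at 1 ton per MARR
marr_power = marr_mass * 10.0         # W, at 10 W/kg -> 40 kW

computer_mass = 2200.0                                # kg
specific_mass = computer_mass / seed_mass             # 2.2e-2 kg/kg serviced
computer_power = computer_mass * 17.0                 # W, at 17 W/kg -> ~37 kW
```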
5G.4 Information and Control Estimates
The team assumed that the assembly of a typical part may be described by 10⁴ bits (about one page of printed text), an extremely conservative estimate judging from the instructions printed in Ford Truck (1960) and Chilton (1971), and especially since the seed has only about 1000 different kinds of parts. Thus (10⁴ bits/part)(10⁶ parts/seed) = 10¹⁰ bits are needed to permit the assembly sector to assemble the entire initial seed. To operate the sector may require an order of magnitude less capacity than that needed for complete self-description, about 10⁹ bits. Applying similar calculations to other sector subsystems gives the estimates tabulated in table 5.1 - ATVs lie between mining and paving robots in complexity, and warehoused parts, each labeled by 100 bits, require a total of 10⁸ bits for identification, and perhaps an order of magnitude less for the computer controller that operates the warehouse and its robots.
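The bit-count bookkeeping above is simple enough to tabulate directly; the variable names here are illustrative only:

```python
# Information estimates for the assembly sector, as tallied above.
bits_per_part = 10**4              # ~1 printed page of assembly instructions
parts_per_seed = 10**6

self_description = bits_per_part * parts_per_seed  # 10**10 bits for whole seed
operating = self_description // 10                 # ~an order less: 10**9 bits

label_bits = 100                                   # per warehoused part
warehouse_id = label_bits * parts_per_seed         # 10**8 bits for identification
warehouse_control = warehouse_id // 10             # ~10**7 bits for the controller
```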
Abraham, Richard G.; and Beres, James F.: Cost-Effective Programmable Assembly Systems. Paper presented at the 1st North American Industrial Robot Conference, 26-28 October 1976. Reprinted in William R. Tanner, ed., Industrial Robots, Volume 2: Applications, Society of Manufacturing Engineers, Dearborn, Michigan, 1979, pp. 213-236.
Abraham, Richard G.; and Shum, L. Y.: Robot Arc Welder with Contouring Teach Mode. In Proc. 5th International Symposium on Industrial Robots, IIT Research Institute, Chicago, Illinois, September 1975. Society of Manufacturing Engineers, Dearborn, Mich., 1975, pp. 239-258.
Agin, Gerald J.; and Duda, Richard O.: SRI Vision Research for Advanced Industrial Automation. In 2nd USA-Japan Computer Conference, Session 5-4-5, 1975, pp. 113-117. Proceedings, Aug. 26-28, 1975. American Federation of Information Processing Societies, Montvale, N.J., 1975.
Ambler, A. P.; Barrow, H. G.; Brown, C. M.; Burstall, R. M.; Popplestone, R. J.: A Versatile System for Computer-Controlled Assembly. Artificial Intelligence, vol. 6, Summer 1975, pp. 129-156.
Anderson, R. H.: Programmable Automation: The Future of Computers in Manufacturing. Datamation, vol. 18, December 1972, pp. 46-52.
Ansley, Arthur C.: Manufacturing Methods and Processes. Revised and enlarged edition. Chilton Book Company, Philadelphia, 1968.
Bolles, R. C.; and Paul, R.: The Use of Sensory Feedback in a Programmable Assembly System. Computer Science Department, Stanford Univ., Stanford, California, October 1973. Stan-CS-73-396, AD-772064.
Chilton's Auto Repair Manual, 1964-1971: Chilton Book Company, Philadelphia, 1971.
Clarke, Arthur C.: The Promise of Space. Harper and Row Publ., New York, 1968.
Cliff, Rodger A.: An Hierarchical System Architecture for Automated Design, Fabrication, and Repair. Paper presented at the 5th Princeton/AIAA/SSI Conference on Space Manufacturing, 18-21 May 1981, Princeton, NJ.
Criswell, David R.: Extraterrestrial Materials Processing and Construction. NSR 09-051-001 Mod. 24, Final Report, 31 January 1980.
Ejiri, M. et al.: An Intelligent Robot with Cognition and Decision-Making Ability. Proc. 2nd Intl. Joint Conf. on Artificial Intelligence, London, Sept. 1971, British Computer Society, London, 1971, pp. 350-358.
Feldman, J. et al.: The Use of Vision and Manipulation to Solve the Instant Insanity Puzzle. Proc. 2nd Intl. Joint Conf. on Artificial Intelligence, London, Sept. 1971. British Computer Society, London, 1971, pp. 359-364.
Ford Truck Shop Manual: Ford Motor Company, 1960.
Foster, David B.: Modern Automation. Pitman, London, 1963.
Freitas, Robert A., Jr.: A Self-Reproducing Interstellar Probe. J. British Interplanet. Soc., vol. 33, July 1980, pp. 251-264.
Goto, T.: Compact Packaging by Robot with Tactile Sensors. Proc. 2nd Intl. Symp. Industrial Robots, Chicago, 1972. IIT Research Institute, Chicago, Illinois, 1972.
Goto, T.; Inoyama, T.; and Takeyasu, K.: Precise Insert Operation by Tactile Controlled Robot HI-T-HAND Expert-2. Proc. 4th Intl. Symp. Industrial Robots, Tokyo, November 1974, pp. 209-218.
Hart, Peter E.: Progress on a Computer Based Consultant. SRI International Publication 711, January 1975.
Heginbotham, W. B.; Kitchin, P. W.; and Pugh, A.: Visual Feedback Applied to Programmable Assembly Machines. Proc. 2nd Int. Symp. Industrial Robots, Chicago, 1972. IIT Research Institute, Chicago, Illinois, 1972, pp. 77-88.
Heppenheimer, T. A.: Colonies in Space. Stackpole Books, PA, 1977.
Hollis, Ralph: NEWT: A Mobile, Cognitive Robot. Byte, vol. 2, June 1977, pp. 30-45.
Honda, Fujio, ed.: Methodology for Unmanned Metal Working Factory. Project Committee of Unmanned Manufacturing System Design, Bulletin of Mechanical Engineering Laboratory No. 13, Tokyo, 1974.
Inoue, H.: Computer Controlled Bilateral Manipulator. Bull. Japanese Soc. Mech. Eng., vol. 14, March 1971, pp. 199-207.
Johnson, Richard D.; and Holbrow, Charles, eds.: Space Settlements: A Design Study, NASA SP-413, 1977. 185 pp.
Kirsch, Jerry: Progression of Intelligence in Limited Sequence Robots. Paper presented at the 1st North American Industrial Robot Conference, 26-28 October 1976.
Leonard, Raymond S.: Automated Construction of Photovoltaic Power Plants. Paper prepared as in-house document, Bechtel National, Inc., San Francisco, California, 1980. 36 pp.
Luke, Hugh D.: Automation for Productivity. Wiley, New York, 1972.
Martin, A. R., ed.: Project Daedalus - The Final Report on the BIS Starship Study. British Interplanetary Soc., 1978. (J. British Interplanetary Soc., Supplement 1978.)
McGhie, Dennis; and Hill, John W.: Vision-Controlled Subassembly Station. Paper delivered at Robots III Conference, Chicago, Illinois, 7-9 November 1978.
Merchant, M. Eugene: The Future of CAM Systems. In National Computer Conference and Exposition, Anaheim, 1975. AFIPS Conference Proceedings, vol. 44, 1975, pp. 793-799.
Merchant, M. E.: The Future of Batch Manufacture. Phil. Trans. Royal Soc. London, vol. 275A, 1973, pp. 357-372.
Nevins, James L.; and Whitney, Daniel E.: Computer Controlled Assembly. Scientific American, vol. 238, February 1978, pp. 62-74.
Nitzan, David: Robotic Automation at SRI. Proc. of IEEE Midwest Conference. MIDCON/79, Chicago, Illinois, 6-8 November 1979. Western Periodicals, Hollywood, California, 1979, Paper 5.1.
Nitzan, D.; Brain, A. E.; and Duda, R. O.: The Measurement and Use of Registered Reflectance and Range Data in Scene Analysis. Proc. IEEE, vol. 65, February 1977, pp. 206-220.
Nitzan, David; and Rosen, Charles A.: Programmable Industrial Automation. IEEE Trans. Computers, vol. C-25, December 1976, pp. 1259-1270.
Nitzan, D.; Rosen, C.; Agin, C.; Bolles, R.; Gleason, G.; Hill, J.; McGhie, D.; Prajoux, R.; Park, W.; and Sword, A.: Machine Intelligence Research Applied to Industrial Automation. 9th Report, SRI International, August 1979.
Okada, Tokuji: A Versatile End-Effector with Flexible Fingers. Robotics Age, vol. 1, Winter 1979, pp. 31, 33-39.
Olsztyn, J. T., et al.: An Application of Computer Vision to a Simulated Assembly Task. Proc. 1st International Joint Conference on Pattern Recognition, Washington, D.C., 1973, pp. 505-513.
Perkins, W. A.: Multilevel Vision Recognition System. 3rd International Joint Conference on Pattern Recognition, Coronado, Calif., 8-11, 1976. IEEE, New York, 1976, pp. 739-744.
Perkins, W. A.: Model-Based Vision System for Scenes Containing Multiple Parts. General Motors Research Laboratories Publication GMR-2386, June 1977a.
Perkins, W. A.: A Model-Based Vision System for Industrial Parts. General Motors Research Laboratories Publication GMR-2410, June 1977b.
Prajoux, Roland: Robotics Research in France. Robotics Age, vol. 2, Spring 1980, pp. 16-26.
Raphael, B., et al.: Research and Applications - Artificial Intelligence. NASA CR-131991, 1971.
Rosen, Charles A.: Machine Vision and Robotics: Industrial Requirements. In Computer Vision and Sensor-Based Robots, George G. Dodd, Lothar Rossol, eds., Plenum Publ. Co., 1979, pp. 3-20, 20-22 (discussion).
Rosen, Charles A.; and Nitzan, David: Use of Sensors in Programmable Automation. Computer, vol. 10, December 1977, pp. 12-23.
Rosen, C. A.; Agin, C.; Andeen, G.; and Berger, J.: Machine Intelligence Research Applied to Industrial Automation. 6th Report, SRI International, November 1976. PB-289827/8, NSF/RA-761655.
Rosen, C.; Nitzan, D.; Agin, G.; Bavarsky, A.; Gleason, G.; Hill, J.; McGhie, D.; and Park, W.: Machine Intelligence Research Applied to Industrial Automation. 8th Report, SRI International, August 1978.
Sato, T.: Automotive Stereo Vision Using Deconvolution Technique. From 6th Intl. Joint Conf. on Artificial Intelligence, Tokyo, Japan, 1979. Stanford Univ., Computer Science Dept., Stanford, Calif., 1979.
Seko, K.; and Toda, H.: Development and Application Report in the Arc Welding and Assembly Operation by the High Performance Robot. In Proc. 4th Intl. Symp. Industrial Robots, Tokyo, Japan, November 1974, pp. 487-596. Japan Industrial Robot Assn., Tokyo, 1974.
Shapiro, Sydney F.: Digital Technology Enables Robots to See. Computer Design, January 1978. Reprinted in William R. Tanner, ed., Industrial Robots, Volume 2: Applications, Society of Manufacturing Engineers, Dearborn, Michigan, 1979, pp. 271-276.
Takeda, S.: Study of Artificial Tactile Sensors for Shape Recognition - Algorithm for Tactile Data Input. Proc. 4th Intl. Symp. Industrial Robots, Tokyo, Japan, November 1974. Japan Industrial Robot Assn., Tokyo, 1974, pp. 199-208.
Takese, Kunikatsu: Force Control of a Multi-Jointed Robot Arm. Robotics Age, vol. 1, Winter 1979, pp. 30, 32-36.
Thompson, Terrence: I See, Said the Robot. Assembly Engineering, Hitchcock Publ. Co., 1978. Reprinted in William R. Tanner, ed., Industrial Robots, Volume 2: Applications, Society of Manufacturing Engineers, Dearborn, Michigan, 1979, pp. 265-270
Tsugawa, S., et al.: An Automobile with Artificial Intelligence. Paper delivered at the 6th Intl. Joint Conf. on Artificial Intelligence, Tokyo, Japan, 1979, pp. 893-895.
Tsukiyama, T.; and Shirai, Y.: Detection of the Movements of Men for Autonomous Vehicles. From 6th Intl. Joint Conf. on Artificial Intelligence, Tokyo, 1979. International Joint Conference on Artificial Intelligence, 1979.
Van Cleave, David A.: One Big Step for Assembly in the Sky. Iron Age, vol. 28, November 1977. Reprinted in William R. Tanner, ed., Industrial Robots, Volume 2: Applications, Society of Manufacturing Engineers, Dearborn, Michigan, 1979, pp. 209-212.
Will, P. M.; and Grossman, D. D.: An Experimental System for Computer Controlled Mechanical Assembly. IEEE Transactions on Computers, vol. C-24, September 1975, pp. 879-888.
Winston, P. H.: The MIT Robot. In Machine Intelligence 7, B. Meltzer, D. Michie, eds., Edinburgh Univ. Press, Edinburgh, 1972.
Yachida, M.; and Tsuji, S.: A Machine Vision for Complex Industrial Parts with Learning Capacity. Proc. 4th Intl. Joint Conference on Artificial Intelligence, Tbilisi, Sept. 1975. Artificial Intelligence Laboratory, Publications Dept., MIT, Cambridge, Mass., 1975.
Yoda, H.; Ikeda, S.; Ejiri, M.: New Attempt of Selecting Objects Using a Hand-Eye System. Hitachi Review, vol. 22, no. 9, 1973, pp. 362-365.